diff --git a/spaces/101-5/gpt4free/g4f/.v1/testing/usesless_test.py b/spaces/101-5/gpt4free/g4f/.v1/testing/usesless_test.py deleted file mode 100644 index e2e35547b0553611f321ab571f6af57def748807..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/g4f/.v1/testing/usesless_test.py +++ /dev/null @@ -1,13 +0,0 @@ -import usesless - -question1 = "Who won the world series in 2020?" -req = usesless.Completion.create(prompt=question1) -answer = req["text"] -message_id = req["parentMessageId"] - -question2 = "Where was it played?" -req2 = usesless.Completion.create(prompt=question2, parentMessageId=message_id) -answer2 = req2["text"] - -print(answer) -print(answer2) diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anthony Hamilton The Point Of It All Full Album Zip.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anthony Hamilton The Point Of It All Full Album Zip.md deleted file mode 100644 index c082bae466c4ef4d6ef33b4b07cb70ad0e30dfc6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anthony Hamilton The Point Of It All Full Album Zip.md +++ /dev/null @@ -1,22 +0,0 @@ -
-

Anthony Hamilton: The Point Of It All - A Soulful Masterpiece

-

If you are looking for a soulful and heartfelt R&B album, you should definitely check out Anthony Hamilton's The Point Of It All. This album, released in 2008, showcases Hamilton's smooth vocals, honest lyrics, and musical versatility. Whether you want to groove to some upbeat tracks, relax to some mellow tunes, or get inspired by some uplifting messages, this album has it all.

-

The Point Of It All is Hamilton's fourth studio album and his most successful one to date. It debuted at number 12 on the Billboard 200 chart and number one on the Top R&B/Hip-Hop Albums chart. It also received a Grammy nomination for Best R&B Album in 2010. The album features 16 tracks, including the singles "Cool", "The Point Of It All", and "Pray For Me".

-

Anthony Hamilton, The Point Of It All full album zip


Download ✒ ✒ ✒ https://byltly.com/2uKzlT



-

Some of the highlights of the album are:

- -

If you want to listen to Anthony Hamilton's The Point Of It All full album zip, you can download it from various online platforms or stream it on YouTube[^3^]. You won't regret it!

- -

Anthony Hamilton is not only a talented singer, but also a prolific songwriter and record producer. He has co-written and produced songs for many artists, such as Donell Jones, Sunshine Anderson, Jill Scott, Angie Stone, John Legend, and Al Green. He has also collaborated with rappers like Jadakiss, 2Pac, Nappy Roots, and David Banner. Hamilton's music has been featured in several movies and TV shows, such as Django Unchained, American Gangster, The Best Man Holiday, and Empire.

-

Hamilton was born on January 28, 1971, in Charlotte, North Carolina[^1^]. He started singing in his church choir when he was 17 years old. He attended South Mecklenburg High School, where he sang for his school's award-winning choir[^1^]. In 1993, he moved to New York City and signed with Uptown Records, but his debut album was shelved when the label went out of business. He then moved to MCA Records and released his first album XTC in 1996, but it failed to make an impact. He later joined Soulife Records and recorded another album that was also unreleased. He then became a backup singer for D'Angelo and wrote songs for other artists. His breakthrough came in 2002 when he sang on Nappy Roots' hit single "Po' Folks", which earned him a Grammy nomination[^2^]. He then signed with Jermaine Dupri's So So Def Records and released his second album Comin' from Where I'm From in 2003, which went platinum and spawned the hits "Comin' from Where I'm From" and "Charlene".

-

Since then, Hamilton has released several more albums that have received critical acclaim and commercial success. His third album Ain't Nobody Worryin' (2005) featured the singles "Can't Let Go" and "Sista Big Bones". His fourth album The Point Of It All (2008) included the songs "Cool", "The Point Of It All", and "Pray For Me". His fifth album Back to Love (2011) had the tracks "Woo", "Best of Me", and "Pray for Me". His sixth album What I'm Feelin' (2016) showcased the songs "Amen", "Save Me", and "Ever Seen Heaven". His seventh album Love Is the New Black (2021) contained the songs "You Made a Fool of Me", "Love Is the New Black", and "Mercy" [^2^]. Hamilton has won one Grammy Award for his duet with Al Green on "You've Got the Love I Need" in 2009 and has been nominated for 16 more [^2^].

-

-

Anthony Hamilton is one of the most respected and influential R&B artists of his generation. He has a distinctive voice that blends soul, gospel, blues, and hip-hop. He has a loyal fan base that appreciates his authentic and relatable music. He has a remarkable career that spans over two decades and shows no signs of slowing down. He is truly a soulful masterpiece.

7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk 2016 All Products Patch Keygen -X-Force Zip _BEST_.md b/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk 2016 All Products Patch Keygen -X-Force Zip _BEST_.md deleted file mode 100644 index 55a020067445c0f3271e1485b4f9601e03e0c0be..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk 2016 All Products Patch Keygen -X-Force Zip _BEST_.md +++ /dev/null @@ -1,7 +0,0 @@ - -

If you still havent found what you are looking for, try our other categories, location, keywords, Autodesk, licensing, files, disk images, emulators, drivers, Autodesk, software, or more! This Unlocked file is only for use with the. Autodesk based products, such as AutoCAD, Revit, 3ds Max, Maya, FreeCAD, and Inventor.

-

Because the company releases the AutoCAD 2016 and student and professional editions of other products such as AutoCAD, 3ds Max, Maya, Inventor at regular intervals, we can use these to create unlimited licensed keys for your own free use. This keygen is not for sale, and is only used for testing. This is the latest license key patch, so you may use this.

-

Autodesk 2016 All Products Patch Keygen -X-Force Zip


DOWNLOAD ☆☆☆ https://imgfil.com/2uy0YH



-

Summary of features in Autodesk 2016 All Products Patch. Comprehensive, free solution to. 'Revert' the previous patch. Autodesk License Patcher allows you to unlock, activate, and uninstall the serial. Autodesk Design Review. Website, www.autodesk.com/products/3ds-max/overview. Autodesk 2014 All Products Universal Keygen For Windows & Mac... Autodesk. Generate, Update, Protect and Validate License Keys for Software. Updated 9/25/2015: ARCHIVE FOR DOWNLOAD DATE 9/25/2015. Update All Products. Autodesk 2016 All Products Patch Keygen -X-Force Zip . Product - 2016. Product - 2013. Official Autodesk Products Site - Home - Unofficial Autodesk 2016 All Products Patch Keygen -X-Force Zip . Autodesk Design Review. Autodesk 2014 All Products. Generation, Update and. Keygens. Autodesk 2007 All Products Unofficial Full Patch. Autodesk. Server Connection. Key Generation. Backup|Generate, Update and. . *Click on Mem Patch (you should see successfully patched) Autodesk 2016 All Products Patch Keygen -X-Force Zip . Autodesk Software at Autodesk. Autodesk.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Crack File For Sap 2000 V15 16 Fix.md b/spaces/1gistliPinn/ChatGPT4/Examples/Crack File For Sap 2000 V15 16 Fix.md deleted file mode 100644 index 792c7a1e0806669742d2f2dafbb85591109c6016..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Crack File For Sap 2000 V15 16 Fix.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

a new study has found that the leading cause of a fire in a health-care facility is "neglect" (22 percent of the time) and that the leading cause of a fire in a home is "accidental combustion" (50 percent of the time). this became evident when, as part of a health-care fire drill, the owners of a hospital found a fire in a hallway and called 911. firefighters promptly arrived on scene, but in their rush, the first thing they did was to set up a four-sided "portable corridor" that doubled as a human flammability barrier. that, in turn, displaced the ambulances with the patients. "there were 700 patients housed in the building at the time of the fire.

-

there is a clear documentation of when to use these options to make system changes and when to use them to simply create and operate a new system. if you're not following the documentation properly, you might lock yourself out of the system. in the example above, if you were to use the reset command incorrectly, you would find you could not log on to the new system.

-

Crack File For Sap 2000 V15 16


Download Ziphttps://imgfil.com/2uxZeF



-

the restorex command is used to protect against data corruption and accidental formatting. the restorex program is not a backup program; it is part of the unix core services that will attempt to fix errors. the restorex can be used to recover from suse linux enterprise software or any operating system, regardless of the backup method. restorex will only recover files that have been created or altered since the last time the file system was consistency checked. you may use this option to recover corrupt files. it's not a full backup recovery, but it's a great tool to have in your backup toolbelt.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download AutoCAD P ID 2010 Portable 64 Bit LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download AutoCAD P ID 2010 Portable 64 Bit LINK.md deleted file mode 100644 index c37120cb589a798a058415c6914e3b8348e8ba2f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download AutoCAD P ID 2010 Portable 64 Bit LINK.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

in addition to this, there are many more features that make autocad much more user friendly and also easier to use. among those include the use of graphics and icons to make the interface much more user friendly and the use of a mouse instead of a keyboard to make the interface much more intuitive.

-

Download AutoCAD P ID 2010 Portable 64 Bit


Download Zip 🗸 https://imgfil.com/2uy17g



-

the interface is also more compact, and you can toggle through different functions on your screen by hitting an icon at the top for the specific function you want to use. autocad p-tech also has a more robust architecture and the features that make the traditional autocad user interface much more intuitive. you can learn more about p-tech at autodesk's autocad p-tech page.

-

hi, i have downloaded the autocad p 2020 portable version from but i don't know how to install it on my computer. i have windows 10 and i'm using windows 7 home edition. i have no idea how to install it. can someone please help me? thanks so much in advance!

-

i am having problems with installing autocad portable 2020 on my 64-bit computer. when i start the installation, it first shows a screen of two different versions: one called "home" and the other called "portable". i can't understand why this happens and what the difference between the two versions of the program is. could someone help me with this?

-

-

thank you so much. i've downloaded the autocad portable 2020 from the website and i've installed it. the problem is that every time i try to open it, it doesn't open. i can't understand why. i can't even find the autocad or autocad.exe file that should be in the "program files" folder. how do i fix this?

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1line/AutoGPT/tests/test_token_counter.py b/spaces/1line/AutoGPT/tests/test_token_counter.py deleted file mode 100644 index 6d7ae016b2f823123b0b69b2eeb3eab50d94f00f..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/tests/test_token_counter.py +++ /dev/null @@ -1,63 +0,0 @@ -import unittest - -import tests.context -from autogpt.token_counter import count_message_tokens, count_string_tokens - - -class TestTokenCounter(unittest.TestCase): - def test_count_message_tokens(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages), 17) - - def test_count_message_tokens_with_name(self): - messages = [ - {"role": "user", "content": "Hello", "name": "John"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages), 17) - - def test_count_message_tokens_empty_input(self): - self.assertEqual(count_message_tokens([]), 3) - - def test_count_message_tokens_invalid_model(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - with self.assertRaises(KeyError): - count_message_tokens(messages, model="invalid_model") - - def test_count_message_tokens_gpt_4(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages, model="gpt-4-0314"), 15) - - def test_count_string_tokens(self): - string = "Hello, world!" - self.assertEqual( - count_string_tokens(string, model_name="gpt-3.5-turbo-0301"), 4 - ) - - def test_count_string_tokens_empty_input(self): - self.assertEqual(count_string_tokens("", model_name="gpt-3.5-turbo-0301"), 0) - - def test_count_message_tokens_invalid_model(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - with self.assertRaises(NotImplementedError): - count_message_tokens(messages, model="invalid_model") - - def test_count_string_tokens_gpt_4(self): - string = "Hello, world!" - self.assertEqual(count_string_tokens(string, model_name="gpt-4-0314"), 4) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Agar.io for Windows 10 and Enjoy the Multiplayer Action.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Agar.io for Windows 10 and Enjoy the Multiplayer Action.md deleted file mode 100644 index 9c9906787b9acd4ecf3c6478b16f85aa789268b9..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Agar.io for Windows 10 and Enjoy the Multiplayer Action.md +++ /dev/null @@ -1,134 +0,0 @@ - -

Agar.io Download Windows 10: How to Play the Popular Online Game on Your PC

-

If you are looking for a simple but addictive online game that you can play with millions of other players around the world, you might want to try Agar.io. This game has been around since 2015, but it is still one of the most popular games on the internet. In this article, we will show you what Agar.io is, why it is so fun, and how you can download and install it on your Windows 10 PC.

-

agar.io download windows 10


DOWNLOAD ❤❤❤ https://urlin.us/2uT1tk



-

What is Agar.io and Why is it So Fun?

-

Agar.io is a multiplayer online game that starts you out as one tiny cell in a huge map. Your goal is to grow bigger by eating smaller cells, while avoiding being eaten by bigger cells. You can also split your cell into smaller pieces to move faster or to escape from predators. The game is simple but challenging, as you have to balance between growing and surviving.

-

One of the reasons why Agar.io is so fun is that you can play with other players from different countries and regions. You can join different servers based on your location, or create your own private room to play with your friends. You can also chat with other players using emojis or custom messages. The game is constantly updated with new features and content, such as new game modes, skins, events, and leaderboards.

-

How to Download and Install Agar.io on Windows 10

-

There are two main ways to download and install Agar.io on your Windows 10 PC. One is using an emulator, which allows you to run Android apps on your computer. The other is using a website that offers free downloads of PC games. We will explain both options below.

-

Option 1: Using BlueStacks Emulator

-

BlueStacks is one of the most popular and trusted emulators that you can use to play Android games on your PC. It has many features and advantages that make it a great choice for playing Agar.io on your computer. Here are the steps to follow:

-

Step 1: Download and Install BlueStacks

-

To download BlueStacks, you can visit their official website here. You will see a button that says "Download Agar.io on PC". Click on it and the download will start automatically. Once the download is complete, run the installer and follow the instructions to install BlueStacks on your PC.

-

agar.io pc download windows 10
-agar.io game download for windows 10
-agar.io free download windows 10
-agar.io download for pc windows 10
-agar.io download windows 10 64 bit
-agar.io offline download windows 10
-agar.io online download windows 10
-agar.io app download windows 10
-agar.io download for laptop windows 10
-agar.io download for desktop windows 10
-agar.io mod download windows 10
-agar.io hack download windows 10
-agar.io skins download windows 10
-agar.io bots download windows 10
-agar.io cheats download windows 10
-agar.io emulator download windows 10
-agar.io apk download windows 10
-agar.io exe download windows 10
-agar.io installer download windows 10
-agar.io setup download windows 10
-agar.io full version download windows 10
-agar.io latest version download windows 10
-agar.io update download windows 10
-agar.io patch download windows 10
-agar.io crack download windows 10
-agar.io play on pc windows 10
-agar.io play online on windows 10
-agar.io play offline on windows 10
-agar.io play with friends on windows 10
-agar.io play with bots on windows 10
-agar.io play with skins on windows 10
-agar.io play with mods on windows 10
-agar.io play with hacks on windows 10
-agar.io play with cheats on windows 10
-agar.io play on laptop windows 10
-agar.io play on desktop windows 10
-agar.io play on emulator windows 10
-agar.io play on apk windows 10
-agar.io play on exe windows 10
-agar.io play on installer windows 10
-how to download and install agar.io on windows 10
-how to download and play agar.io on windows 10
-how to download and update agar.io on windows 10
-how to download and patch agar.io on windows 10
-how to download and crack agar.io on windows 10
-how to run and play agar.io on windows 10
-how to run and update agar.io on windows 10
-how to run and patch agar.io on windows 10
-how to run and crack agar.io on windows 10

-

Step 2: Launch BlueStacks and Log in to Google Play Store

-

After installing BlueStacks, launch it from your desktop or start menu. You will see a welcome screen that asks you to log in to your Google account. This is necessary to access the Google Play Store, where you can find and install Android apps. If you don't have a Google account, you can create one for free.

-

Step 3: Search for Agar.io and Install it

-

Once you are logged

Once you are logged in to the Google Play Store, you can search for Agar.io in the search bar. You will see the Agar.io app icon with a green background and a white dot. Click on it and then click on the "Install" button. The app will be downloaded and installed on your PC.

-

Step 4: Enjoy Playing Agar.io on PC

-

After installing Agar.io, you can launch it from the BlueStacks home screen or the app drawer. You will see the Agar.io logo and then the game will start. You can use your mouse to move your cell around the map, and use the space bar to split your cell or the W key to eject some mass. You can also customize your settings, such as your nickname, skin, game mode, and chat options.

-

Option 2: Using GameTop Website

-

GameTop is a website that offers free downloads of PC games, including Agar.io. It is a safe and reliable source that does not contain any viruses or malware. Here are the steps to follow:

-

Step 1: Visit GameTop Website and Click on Agar.io

-

To visit GameTop website, you can click here. You will see a list of categories of games, such as action, arcade, puzzle, racing, and more. Scroll down until you find the category "Online Games". Under this category, you will see Agar.io with a blue background and a white dot. Click on it and you will be redirected to the Agar.io game page.

-

Step 2: Download and Install Agar.io for PC

-

On the Agar.io game page, you will see a button that says "Download Free Full Version". Click on it and the download will start automatically. Once the download is complete, run the installer and follow the instructions to install Agar.io on your PC.

-

Step 3: Launch Agar.io and Start Playing

-

After installing Agar.io, you can launch it from your desktop or start menu. You will see the Agar.io logo and then the game will start. You can use your mouse to move your cell around the map, and use the space bar to split your cell or the W key to eject some mass. You can also customize your settings, such as your nickname, skin, game mode, and chat options.

-

Tips and Tricks for Playing Agar.io on PC

-

Now that you know how to download and install Agar.io on your Windows 10 PC, you might want to learn some tips and tricks to improve your gameplay and have more fun. Here are some of them:

-

Use Keyboard Shortcuts for Better Control

-

One of the advantages of playing Agar.io on PC is that you can use keyboard shortcuts to perform certain actions faster and easier. Here are some of the keyboard shortcuts that you can use:

- - - - - - - - - - - - - - - - - - -< -
KeyAction
SpaceSplit your cell into two pieces
WEject some mass from your cell
EEject mass continuously (hold down)
RSplit multiple times (hold down)
TLock or unlock your mouse cursor
PPause or resume the game
MMute or unmute the sound effects
SShow or hide skins
NShow or hide names
HShow or hide chat messages
CShow or hide colors
GShow or hide grid lines
AShow or hide mass numbers
F11Enter or exit full screen mode
F12Take a screenshot of the game window
F5Refresh or restart the game page
ESCExit the game and return to the main menu
-

You can also change the keyboard shortcuts in the settings menu if you prefer different keys.

-

Adjust the Graphics Settings for Optimal Performance

-

Another advantage of playing Agar.io on PC is that you can adjust the graphics settings to suit your preferences and your computer's capabilities. You can find the graphics settings in the settings menu, under the "Graphics" tab. Here are some of the options that you can tweak:

- -

You can experiment with different combinations of graphics settings until you find the optimal balance between quality and performance for your PC.

-

Try Different Game Modes and Skins for More Variety

-

One of the reasons why Agar.io is so popular is that it offers a lot of variety and customization options for players. You can try different game modes and skins to spice up your gameplay and express your personality. Here are some of the game modes and skins that you can choose from:

- -

You can change your game mode and skin in the main menu before starting a game. You can also switch between different servers and regions to play with different players.

-

Conclusion

-

Agar.io is a fun and addictive online game that you can play with millions of other players around the world. It is easy to download and install on your Windows 10 PC using either an emulator or a website. You can also improve your gameplay and have more fun by using keyboard shortcuts, adjusting graphics settings, and trying different game modes and skins. If you are looking for a simple but challenging game that will keep you entertained for hours, you should give Agar.io a try.

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Lords 2 Guild Castle Mod Apk and Join the Epic Battle of Heroes and Fiends.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Lords 2 Guild Castle Mod Apk and Join the Epic Battle of Heroes and Fiends.md deleted file mode 100644 index 1b7f37f9324ee55bbd80f0ad0d7abe8502239bed..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Lords 2 Guild Castle Mod Apk and Join the Epic Battle of Heroes and Fiends.md +++ /dev/null @@ -1,99 +0,0 @@ - -

Download Clash of Lords 2 Guild Castle Mod Apk: A Strategy Game with Fun and Innovative Twists

-

If you are looking for a strategy game that is different from the usual ones, you might want to try Clash of Lords 2 Guild Castle. This game is a sequel to the original Clash of Lords, and it has a lot of new features and improvements that make it more fun and exciting. In this game, you can recruit over 50 heroes and their mercenaries, build and defend a base, and fight alongside your friends in over 10 PvE and PvP modes. You can also control the action and activate heroes' skills in real time, pair heroes and troops with a unique mercenary system, and play it your way with various options and challenges.

-

But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money and gems, free hero hires and arena challenges, no ads and root required, and more? Well, you can do that by downloading Clash of Lords 2 Guild Castle mod apk. This is a modified version of the game that gives you access to all the premium features and benefits that you would normally have to pay for or work hard for. In this article, we will tell you what is Clash of Lords 2 Guild Castle mod apk, why you should download it, how to download and install it, and some tips and tricks for playing the game.

-

download clash of lords 2 guild castle mod apk


DOWNLOAD 🔗 https://urlin.us/2uSVPx



-

What is Clash of Lords 2 Guild Castle?

-

Clash of Lords 2 Guild Castle is a strategy game developed by IGG.COM, a company that is known for creating popular games such as Lords Mobile, Castle Clash, Mobile Royale, etc. The game was released in May 2014 for Android devices, and in June 2014 for iOS devices. Since then, it has been downloaded over 10 million times on Google Play Store alone, and has received positive reviews from players and critics alike. The game has also been updated regularly with new content, features, events, heroes, etc.

-

Game features

-

Some of the main features of Clash of Lords 2 Guild Castle are:

-

How to install clash of lords 2 guild castle mod apk on android
-Clash of lords 2 guild castle mod apk unlimited money and gems
-Clash of lords 2 guild castle mod apk latest version download
-Clash of lords 2 guild castle mod apk offline mode
-Clash of lords 2 guild castle mod apk free shopping
-Clash of lords 2 guild castle mod apk hack cheats
-Clash of lords 2 guild castle mod apk no root required
-Clash of lords 2 guild castle mod apk gameplay and review
-Clash of lords 2 guild castle mod apk features and benefits
-Clash of lords 2 guild castle mod apk download link and instructions
-Clash of lords 2 guild castle mod apk best heroes and strategies
-Clash of lords 2 guild castle mod apk tips and tricks
-Clash of lords 2 guild castle mod apk update and news
-Clash of lords 2 guild castle mod apk comparison with original game
-Clash of lords 2 guild castle mod apk pros and cons
-Clash of lords 2 guild castle mod apk for PC and laptop
-Clash of lords 2 guild castle mod apk online multiplayer mode
-Clash of lords 2 guild castle mod apk support and feedback
-Clash of lords 2 guild castle mod apk ratings and reviews
-Clash of lords 2 guild castle mod apk screenshots and videos
-Clash of lords 2 guild castle mod apk system requirements and compatibility
-Clash of lords 2 guild castle mod apk download size and speed
-Clash of lords 2 guild castle mod apk bugs and issues
-Clash of lords 2 guild castle mod apk alternatives and similar games
-Clash of lords 2 guild castle mod apk developer and publisher information

- -

Game modes

-

Some of the game modes that you can play in Clash of Lords 2 Guild Castle are:

- -

Why download Clash of Lords 2 Guild Castle mod apk?

-

Clash of Lords 2 Guild Castle is a fun and addictive game, but it can also be frustrating and time-consuming if you don't have enough resources, heroes, or options. That's why you might want to download Clash of Lords 2 Guild Castle mod apk, which is a modified version of the game that gives you a lot of advantages and benefits that you won't get from the original version. Here are some of the reasons why you should download Clash of Lords 2 Guild Castle mod apk:

-

Unlimited money and gems

-

Money and gems are the main currencies in the game, and you need them to buy items, upgrade buildings, hire heroes, etc. However, they are not easy to come by, and you might have to spend real money or watch ads to get them. But with Clash of Lords 2 Guild Castle mod apk, you don't have to worry about that, because you will have unlimited money and gems at your disposal. You can use them to buy anything you want, upgrade anything you need, and hire any hero you like. You can also use them to speed up the game progress and skip the waiting time.

-

Free hero hires and arena challenges

-

Heroes are the most important part of the game, as they are the ones who lead your troops into battle and use their skills to turn the tide of the war. However, hiring heroes is not cheap, and you might have to spend a lot of money and gems to get them. Moreover, some heroes are only available through arena challenges, which are limited and require tickets to enter. But with Clash of Lords 2 Guild Castle mod apk, you don't have to worry about that, because you will be able to hire any hero for free, and enter any arena challenge without tickets. You can also switch between heroes anytime you want, and experiment with different combinations and strategies.

-

No ads and root required

-

Ads are annoying and distracting, especially when they pop up in the middle of the game or when you are trying to do something important. They can also slow down your device and consume your data. But with Clash of Lords 2 Guild Castle mod apk, you don't have to worry about that, because there will be no ads in the game at all. You can enjoy the game without any interruptions or disturbances. Moreover, some mod apk files require root access to work properly, which can be risky and complicated for your device. But with Clash of Lords 2 Guild Castle mod apk, you don't have to worry about that, because it does not require root access at all. You can install it easily and safely on any Android device.

-

How to download and install Clash of Lords 2 Guild Castle mod apk?

-

If you are convinced by the benefits of downloading Clash of Lords 2 Guild Castle mod apk, you might be wondering how to do it. Well, it's not hard at all, and it only takes a few minutes. Here are the steps that you need to follow:

-

Step 1: Enable unknown sources

-

The first thing that you need to do is enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than Google Play Store. To do this, go to your device settings > security > unknown sources > enable.

-

Step 2: Download the mod apk file

-

The next thing that you need to do is download the mod apk file from a reliable source. You can search for it online or use this link: Clash of Lords 2 Guild Castle Mod Apk Download. Make sure that the file is compatible with your device version and has no viruses or malware.

-

Step 3: Install the mod apk file

-

The last thing that you need to do is install the mod apk file on your device. To do this, locate the file in your file manager or downloads folder, and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.

-

Step 4: Launch the game and enjoy

-

The final thing that you need to do is launch the game and enjoy it. To do this, find the game icon on your home screen or app drawer, and tap on it to open it. You will see a mod menu where you can customize your settings and preferences. You can also access all the features and benefits of the mod apk, such as unlimited money and gems, free hero hires and arena challenges, no ads and root required, etc. You can now play the game with fun and innovative twists.

-

Tips and tricks for playing Clash of Lords 2 Guild Castle

-

Now that you have downloaded and installed Clash of Lords 2 Guild Castle mod apk, you might want to know some tips and tricks for playing the game better and smarter. Here are some of them:

-

Upgrade your town hall and troops

-

One of the most important things that you need to do in the game is upgrade your town hall and troops. Your town hall is the heart of your base, and it determines your level, resource capacity, building limit, etc. Your troops are your main force in battle, and they determine your attack power, defense power, speed, etc. Therefore, you need to upgrade them regularly to unlock new features, abilities, options, etc. You can use your money and gems to speed up the upgrading process.

-

Fuse your heroes to make them stronger

-

Another important thing that you need to do in the game is fuse your heroes to make them stronger. Your heroes are your leaders in battle, and they have unique skills that can change the outcome of the war. However, they also have different levels, ranks, grades, etc. that affect their performance. Therefore, you need to fuse them with other heroes or materials to increase their stats, skills, stars, etc. You can use your money and gems to buy more heroes or materials for fusion.

-

Plan your strategy before heading to battle

-

A third important thing that you need to do in the game is plan your strategy before heading to battle. Your strategy is your plan of action in battle, and it involves choosing your heroes, troops, mercenaries, formations, etc. You need to consider various factors such as your enemy's strength, weakness, type, etc., as well as your own goals, objectives, resources, etc. You also need to adapt your strategy according to the game mode that you are playing. You can use your money and gems to change your strategy anytime you want.

-

Join a guild and fight alongside your friends

-

A fourth important thing that you need to do in the game is join a guild and fight alongside your friends. A guild is a group of players who share a common interest in the game, and who can help each other out with various aspects of the game such as resources, advice, support, etc. You can also participate in guild wars and guild bashes with your guild members, where you can compete with other guilds for glory and rewards. You can use your money and gems to donate to your guild or buy guild gifts.

-

Conclusion

-

In conclusion, Clash of Lords 2 Guild Castle is a strategy game with fun and innovative twists that you can enjoy on your Android device. You can recruit over 50 heroes and their mercenaries, build and defend a base, and fight alongside your friends in over 10 PvE and PvP modes. You can also control the action and activate heroes' skills in real time, pair heroes and troops with a unique mercenary system, and play it your way with various options and challenges. However, if you want to have more fun and freedom in the game, you should download Clash of Lords 2 Guild Castle mod apk, which is a modified version of the game that gives you unlimited money and gems, free hero hires and arena challenges, no ads and root required, and more. You can download and install it easily and safely on your device by following the steps that we have provided in this article. You can also use some tips and tricks that we have shared to play the game better and smarter. We hope that you have enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave them in the comments section below.

-

FAQs

-

Here are some frequently asked questions about Clash of Lords 2 Guild Castle mod apk:

-

Q: Is Clash of Lords 2 Guild Castle mod apk safe to use?

-

A: Yes, Clash of Lords 2 Guild Castle mod apk is safe to use, as long as you download it from a reliable source and scan it for viruses or malware before installing it. However, you should always be careful when downloading and installing any mod apk file, as some of them might contain harmful or malicious code that can damage your device or compromise your privacy. You should also backup your data before using any mod apk file, as some of them might overwrite or delete your original game data.

-

Q: Is Clash of Lords 2 Guild Castle mod apk compatible with my device?

-

A: Clash of Lords 2 Guild Castle mod apk is compatible with most Android devices that run on Android 4.1 or higher. However, some devices might not support the mod apk file due to different specifications or settings. You should check the compatibility of the mod apk file with your device before downloading and installing it. You should also make sure that you have enough storage space on your device to install the mod apk file.

-

Q: Will I get banned for using Clash of Lords 2 Guild Castle mod apk?

-

A: There is a possibility that you might get banned for using Clash of Lords 2 Guild Castle mod apk, as it is against the terms of service and fair play policy of the game. The game developers might detect the use of the mod apk file and suspend or terminate your account. Therefore, you should use the mod apk file at your own risk and discretion. You should also avoid using the mod apk file in online or multiplayer modes, as it might affect the game balance and fairness for other players.

-

Q: Can I update Clash of Lords 2 Guild Castle mod apk?

-

A: Yes, you can update Clash of Lords 2 Guild Castle mod apk, but you might lose some of the features or benefits of the mod apk file. The game developers might release new updates for the original version of the game that might not be compatible with the mod apk file. Therefore, you should check the compatibility of the update with the mod apk file before downloading and installing it. You should also backup your data before updating the mod apk file, as some updates might overwrite or delete your previous game data.

-

Q: Can I uninstall Clash of Lords 2 Guild Castle mod apk?

-

A: Yes, you can uninstall Clash of Lords 2 Guild Castle mod apk anytime you want, just like any other app on your device. To do this, go to your device settings > apps > Clash of Lords 2 Guild Castle > uninstall. However, you should note that uninstalling the mod apk file will also delete all your game data and progress. Therefore, you should backup your data before uninstalling the mod apk file.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Brain Test 3 Tricky Quests Mod APK A Long and Tricky Adventure with Brain Teasing Challenges.md b/spaces/1phancelerku/anime-remove-background/Brain Test 3 Tricky Quests Mod APK A Long and Tricky Adventure with Brain Teasing Challenges.md deleted file mode 100644 index 318db38127f75a5db9962ca55e602dfe53786b66..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Brain Test 3 Tricky Quests Mod APK A Long and Tricky Adventure with Brain Teasing Challenges.md +++ /dev/null @@ -1,96 +0,0 @@ - -

Brain Test 3: Tricky Quests Mod APK - A Fun and Challenging Puzzle Game

-

If you are looking for a game that will test your brain power, make you laugh, and keep you entertained, then you should try Brain Test 3: Tricky Quests. This is a puzzle game with dozens of tricky questions and puzzles accompanied by colorful characters with original stories. In this game, you will join Alyx on her quest to find the six power gems in order to save her dying father. Along the way, you will meet with Brain Test franchise characters and face various challenges and dangers. You will need to use your logic, creativity, and intuition to solve the puzzles and shape the story.

-

Brain Test 3 is the third installment in the popular Brain Test series, which has been downloaded by millions of players worldwide. The game is developed by Unico Studio, a game development studio based in the United States. The game is available for both Android and iOS devices, but if you want to enjoy some extra features and benefits, you should download Brain Test 3 Mod APK. This is a modified version of the game that gives you unlimited resources, no ads, and all levels unlocked. In this article, we will tell you how to download and install Brain Test 3 Mod APK, what are its features, how to play it, what are its benefits, and what are some tips and tricks for playing it.

-

brain test 3 tricky quests mod apk


DOWNLOAD ····· https://jinyurl.com/2uNMO5



-

How to Download and Install Brain Test 3 Mod APK

-

Downloading and installing Brain Test 3 Mod APK is very easy and simple. Just follow these steps:

-
    -
  1. Download the mod APK file from a trusted source. You can use this link to download it.
  2. -
  3. Enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store or App Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  4. -
  5. Install the mod APK file by tapping on it and following the instructions. It may take a few seconds or minutes depending on your device.
  6. -
  7. Launch the game and enjoy!
  8. -
-

What are the Features of Brain Test 3 Mod APK?

-

Brain Test 3 Mod APK has some amazing features that will make your gaming experience more enjoyable and rewarding. Here are some of them:

- -

What are the Benefits of Playing Brain Test 3: Tricky Quests?

-

Brain Test 3 is not only a fun and entertaining game, but also a beneficial one. Playing Brain Test 3 can help you in many ways, such as:

- -

What are some Tips and Tricks for Playing Brain Test 3: Tricky Quests?

-

Brain Test 3 is a game that will test your brain power, make you laugh, and keep you entertained. However, some of the puzzles may be too tricky or difficult for you. Here are some tips and tricks that may help you play better:

- -

Conclusion

-

Brain Test 3: Tricky Quests is a fun and challenging puzzle game that will test your brain power, make you laugh, and keep you entertained. The game has over 100 levels of tricky puzzles and quests that will challenge your logic, creativity, and intuition. The game also has an engaging and immersive story with Alyx and other characters from the Brain Test franchise. You can shape the story with your choices and actions, as well as discover secrets and mysteries about Alyx's past and future.

-

brain test 3 tricky quests unlimited diamond
-brain test 3 tricky quests puzzle game mod apk
-brain test 3 tricky quests mod apk download
-brain test 3 tricky quests hack version
-brain test 3 tricky quests mod apk latest
-brain test 3 tricky quests free hints
-brain test 3 tricky quests mod apk android
-brain test 3 tricky quests cheats and solutions
-brain test 3 tricky quests mod apk modyolo
-brain test 3 tricky quests premium unlocked
-brain test 3 tricky quests mod apk revdl
-brain test 3 tricky quests full version
-brain test 3 tricky quests mod apk rexdl
-brain test 3 tricky quests no ads
-brain test 3 tricky quests mod apk happymod
-brain test 3 tricky quests offline mode
-brain test 3 tricky quests mod apk unlimited money
-brain test 3 tricky quests walkthrough guide
-brain test 3 tricky quests mod apk apkpure
-brain test 3 tricky quests all levels unlocked
-brain test 3 tricky quests mod apk an1
-brain test 3 tricky quests online play
-brain test 3 tricky quests mod apk platinmods
-brain test 3 tricky quests tips and tricks
-brain test 3 tricky quests mod apk ios
-brain test 3 tricky quests pro apk
-brain test 3 tricky quests mod apk android 1
-brain test 3 tricky quests best strategy
-brain test 3 tricky quests mod apk obb
-brain test 3 tricky quests mega mod
-brain test 3 tricky quests mod apk uptodown
-brain test 3 tricky quests review and rating
-brain test 3 tricky quests mod apk mob.org
-brain test 3 tricky quests fun and challenging
-brain test 3 tricky quests mod apk lenov.ru
-brain test 3 tricky quests original version
-brain test 3 tricky quests mod apk blackmod
-brain test 3 tricky quests gameplay video
-brain test 3 tricky quests mod apk andropalace
-brain test 3 tricky quests new update

-

If you want to enjoy some extra features and benefits, you should download Brain Test 3 Mod APK. This is a modified version of the game that gives you unlimited resources, no ads, and all levels unlocked. You can download and install Brain Test 3 Mod APK easily and safely by following the steps in this article.

-

Brain Test 3 is a game that will not only entertain you, but also benefit you in many ways. Playing Brain Test 3 can help you train your brain and improve your cognitive skills, have fun and laugh at the humorous situations and dialogues, and experience an engaging and immersive story with Alyx and other characters. You can also play better by following some tips and tricks that we shared in this article.

-

If you are looking for a game that will test your brain power, make you laugh, and keep you entertained, then you should try Brain Test 3: Tricky Quests. You can download it from Google Play Store or App Store for free, or download Brain Test 3 Mod APK from this link for some extra features and benefits. Have fun playing Brain Test 3!

-

FAQs

-

Here are some common questions and answers about Brain Test 3 Mod APK:

-
    -
  1. Q: Is Brain Test 3 Mod APK safe to download and install?
  2. -
  3. A: Yes, Brain Test 3 Mod APK is safe to download and install. The mod APK file is scanned for viruses and malware before being uploaded to the source. However, you should always download it from a trusted source like this link to avoid any risks.
  4. -
  5. Q: Do I need to root or jailbreak my device to use Brain Test 3 Mod APK?
  6. -
  7. A: No, you do not need to root or jailbreak your device to use Brain Test 3 Mod APK. The mod APK file works on any Android or iOS device without any modifications.
  8. -
  9. Q: Will I get banned from playing Brain Test 3 if I use Brain Test 3 Mod APK?
  10. -
  11. A: No, you will not get banned from playing Brain Test 3 if you use Brain Test 3 Mod APK. The mod APK file does not interfere with the game's servers or data. However, you should always use it at your own risk and discretion.
  12. -
  13. Q: How can I update Brain Test 3 Mod APK?
  14. -
  15. A: You can update Brain Test 3 Mod APK by downloading the latest version of the mod APK file from the same source as before. You can also check this article for any updates or news about Brain Test 3 Mod APK.
  16. -
  17. Q: How can I contact the developer of Brain Test 3 Mod APK?
  18. -
  19. A: You can contact the developer of Brain Test 3 Mod APK by visiting their website or sending them an email at unico@unicostudio.co.
  20. -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Dark Riddle 1.0 APK Solve Puzzles and Escape from a Suspicious Neighbor.md b/spaces/1phancelerku/anime-remove-background/Dark Riddle 1.0 APK Solve Puzzles and Escape from a Suspicious Neighbor.md deleted file mode 100644 index 965d2a9264fcc43fcf7b547722e691cbb2b51519..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Dark Riddle 1.0 APK Solve Puzzles and Escape from a Suspicious Neighbor.md +++ /dev/null @@ -1,138 +0,0 @@ -
-

Dark Riddle 1.0 APK: A Stealth Puzzle Game with a Mysterious Neighbor

-

If you are a fan of stealth puzzle games, you might have heard of Dark Riddle, a game that is similar to the popular Hello Neighbor. In this game, you have to sneak into your neighbor's house and find out what he is hiding in his basement. But be careful, he is not as friendly as he seems, and he will try to stop you at all costs.

-

dark riddle 1.0 apk


Download File ::: https://jinyurl.com/2uNQrt



-

In this article, we will tell you everything you need to know about Dark Riddle 1.0 APK, the latest version of the game that you can download and install on your Android device. We will explain what is Dark Riddle, how to download and install it, and how to play it. Let's get started!

-

What is Dark Riddle?

-

Dark Riddle is a stealth puzzle game developed by PAGA GROUP, a studio that specializes in creating games with immersive stories and realistic graphics. The game was released in 2020 and has received positive reviews from players and critics alike.

-

The premise of the game

-

The game begins in a house you just moved into, and with a suspicious neighbor across the street. You notice that he has a lot of security cameras, locks, and traps around his house, and that he acts very strangely. You decide to investigate what he is up to, and discover that he has a secret basement where he keeps something mysterious.

-

Your goal is to find out what he is hiding, and why he is so obsessed with it. But be careful, he will not let you enter his house easily, and he will chase you if he sees you. You have to use your stealth skills, your logic, and your creativity to avoid him and solve the puzzles that block your way.

-

dark riddle classic game download
-dark riddle retro version apk
-dark riddle neighbor's basement
-dark riddle classic android game
-dark riddle old version free download
-dark riddle classic apkcombo
-dark riddle story mode apk
-dark riddle classic game online
-dark riddle mysterious neighbor apk
-dark riddle classic mod apk
-dark riddle classic game walkthrough
-dark riddle retro version download
-dark riddle neighbor's secrets apk
-dark riddle classic android download
-dark riddle old version apk
-dark riddle classic apkpure
-dark riddle story mode download
-dark riddle classic game play
-dark riddle mysterious neighbor download
-dark riddle classic hack apk
-dark riddle classic game guide
-dark riddle retro version online
-dark riddle neighbor's mystery apk
-dark riddle classic android free
-dark riddle old version mod apk
-dark riddle classic apk mirror
-dark riddle story mode online
-dark riddle classic game review
-dark riddle mysterious neighbor online
-dark riddle classic unlimited money apk
-dark riddle classic game tips
-dark riddle retro version free download
-dark riddle neighbor's adventure apk
-dark riddle classic android app
-dark riddle old version hack apk
-dark riddle classic apk mod menu
-dark riddle story mode free download
-dark riddle classic game cheats
-dark riddle mysterious neighbor free download
-dark riddle classic premium apk

-

The gameplay and features

-

Dark Riddle is a game that combines elements of stealth, puzzle, adventure, and horror. You have to explore your neighbor's house, find clues, collect items, use tools, and interact with objects to progress in the game. You can also distract your neighbor by making noises, throwing objects, or setting traps.

-

The game has many features that make it fun and challenging, such as:

- -

The graphics and sound

-

Dark Riddle has impressive graphics that create a realistic and immersive atmosphere. The game uses Unreal Engine 4, which allows for high-quality textures, lighting, shadows, and animations. The game also has a lot of details and objects that make the environment rich and interactive.

-

The sound design of the game is also very well done, as it enhances the mood and tension of the game. The game has a creepy soundtrack that matches the theme of the game, as well as realistic sound effects that make you feel like you are in the game. The game also has voice acting for the characters, which adds personality and emotion to them.

-

How to download and install Dark Riddle 1.0 APK?

-

If you want to play Dark Riddle on your Android device, you can download and install the APK file of the game, which is a modified version that has some advantages over the original version. Here is how you can do it.

-

The requirements for the APK file

-

Before you download and install the APK file, you need to make sure that your device meets the following requirements:

- -

The steps to download and install the APK file

-

Once you have checked the requirements, you can follow these steps to download and install the APK file:

-
    -
  1. Go to this link and click on the Download button to download the APK file.
  2. -
  3. Locate the downloaded file in your device's file manager and tap on it to start the installation process.
  4. -
  5. Follow the instructions on the screen and wait for the installation to finish.
  6. -
  7. Launch the game from your app drawer and enjoy!
  8. -
-

The benefits of using the APK file

-

By using the APK file, you can enjoy some benefits that are not available in the original version of the game, such as:

- -

How to play Dark Riddle 1.0 APK?

-

Now that you have downloaded and installed Dark Riddle 1.0 APK, you might be wondering how to play it. Here are some tips and tricks that will help you master the game.

-

The controls and interface

-

The game has a simple and intuitive control system that lets you move around, interact with objects, and use items. You can use the virtual joystick on the left side of the screen to move your character, and swipe on the right side of the screen to look around. You can also tap on the icons on the right side of the screen to perform actions such as jumping, crouching, running, or using items.

-

The game also has a user-friendly interface that shows you important information such as your health, inventory, objectives, and map. You can access these by tapping on the icons on the top of the screen. You can also pause the game by tapping on the menu icon on the top right corner of the screen.

-

The tips and tricks

-

The game is not easy, as your neighbor is smart and unpredictable. You have to use your wits and skills to outsmart him and solve the puzzles. Here are some tips and tricks that will help you:

- -

The challenges and rewards

-

The game has many challenges and rewards that make it more fun and rewarding. You can complete various objectives such as finding keys, unlocking doors, entering rooms, or discovering secrets. You can also collect coins and gems that you can use to buy items or tools in the shop. You can also unlock achievements that show your progress and skills in the game.

-

Conclusion

-

Dark Riddle 1.0 APK is a stealth puzzle game that will keep you entertained and challenged for hours. You have to sneak into your neighbor's house and find out what he is hiding in his basement, while avoiding his traps and attacks. You have to use your stealth skills, your logic, and your creativity to solve the puzzles and uncover the secrets. You can also enjoy the sandbox mode, the multiplayer mode, the various endings, and the secrets and Easter eggs that the game has to offer.

-

If you want to play Dark Riddle on your Android device, you can download and install the APK file of the game, which has some benefits over the original version. You can enjoy the game without ads or in-app purchases, with all levels and modes unlocked, with unlimited coins and gems, and with exclusive features and updates.

-

Dark Riddle 1.0 APK is a game that will test your wits and skills, and will give you a thrilling and immersive experience. If you are a fan of stealth puzzle games, you should definitely try it out. You will not regret it!

-

FAQs

-

Here are some frequently asked questions about Dark Riddle 1.0 APK:

- - - - - - - - - - - - - - - - - - - - - - - - - -
QuestionAnswer
Is Dark Riddle 1.0 APK safe to download and install?Yes, Dark Riddle 1.0 APK is safe to download and install, as long as you use a trusted source such as this link . The APK file has been scanned for viruses and malware, and has no harmful effects on your device.
Is Dark Riddle 1.0 APK compatible with my device?Dark Riddle 1.0 APK is compatible with most Android devices that have Android 4.4 or higher, 2 GB of RAM, 500 MB of free storage space, and a stable internet connection. You can check the compatibility of your device by going to Settings > About Phone > Software Information.
How can I update Dark Riddle 1.0 APK?You can update Dark Riddle 1.0 APK by downloading and installing the latest version of the APK file from this link . You do not need to uninstall the previous version of the game, as the new version will overwrite it.
How can I contact the developers of Dark Riddle?You can contact the developers of Dark Riddle by sending them an email at support@pagagroup.com , or by visiting their website at https://pagagroup.com/ . You can also follow them on Facebook , Twitter , or Instagram for news and updates about the game.
How can I share my feedback or suggestions about Dark Riddle?You can share your feedback or suggestions about Dark Riddle by leaving a comment or rating on the Google Play Store , or by sending an email to feedback@pagagroup.com . The developers appreciate your feedback and suggestions, as they help them improve the game and make it more enjoyable for you.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download FIFA 22 Mobile and Experience the FIFA World Cup 2022 on Your Phone.md b/spaces/1phancelerku/anime-remove-background/Download FIFA 22 Mobile and Experience the FIFA World Cup 2022 on Your Phone.md deleted file mode 100644 index 45c5c677130ca65efa7c23c42abacc72ea36137c..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download FIFA 22 Mobile and Experience the FIFA World Cup 2022 on Your Phone.md +++ /dev/null @@ -1,115 +0,0 @@ - -

FIFA 22 Mobile: How to Download and Play the Ultimate Soccer Game on Android

-

If you are a soccer fan, you probably have heard of FIFA 22, the latest installment of the popular soccer simulation game series by EA Sports. But did you know that you can also play FIFA 22 Mobile on your Android device? Yes, you read that right. You can enjoy the ultimate soccer game experience on your smartphone or tablet, with tons of features, modes, players, teams, stadiums, and more.

-

In this article, we will tell you everything you need to know about FIFA 22 Mobile, including what it is, what are its main features and modes, how to download it on your Android device, and some tips and tricks for playing it. So, without further ado, let's get started.

-

fifa 22 mobile download android


DOWNLOAD »»» https://jinyurl.com/2uNPNl



-

FIFA 22 Mobile Features and Modes

-

FIFA 22 Mobile is not just a scaled-down version of FIFA 22 for consoles or PC. It is a full-fledged soccer game that has its own unique features and modes that are designed specifically for mobile devices. Here are some of the most notable ones:

-

HyperMotion Technology

-

One of the biggest innovations in FIFA 22 Mobile is the HyperMotion Technology, which is a new system that uses machine learning to create more realistic and fluid gameplay and animations. HyperMotion Technology analyzes over 8.7 million frames of real-life soccer data and applies them to the game, resulting in more natural movements, reactions, and emotions of the players on the pitch.

-

HyperMotion Technology also works on mobile devices, thanks to the optimization and compression techniques used by EA Sports. However, you will need a compatible device that meets the minimum system requirements to enjoy this feature. According to EA Sports, you will need at least 4 GB of RAM, Android 8.0 or higher, and a device with a 64-bit processor. You can check if your device is compatible by going to the Google Play Store page of FIFA 22 Mobile and looking for the "HyperMotion Compatible" label.

-

FIFA World Cup 2022 Mode

-

Another exciting feature in FIFA 22 Mobile is the FIFA World Cup 2022 Mode, which lets you experience the thrill and excitement of the biggest soccer tournament in the world. FIFA World Cup 2022 Mode allows you to play with any of the 32 qualified nations or 15 non-qualified nations that are available in the game, and compete for the coveted trophy in Qatar.

-

To access FIFA World Cup 2022 Mode, you will need to go to the main menu of FIFA 22 Mobile and tap on the "World Cup" icon. You will then be able to choose your preferred nation and start your journey from the group stage to the knockout stage. You will also be able to earn rewards and challenges along the way, such as coins, packs, players, kits, badges, and more.

-

Manager Mode

-

If you prefer to take a more strategic approach to soccer, you might want to try Manager Mode, which is a mode that lets you manage your own dream team and adjust your tactics in real time or auto-play. Manager Mode gives you full control over your squad, transfers, formations, tactics, training, scouting, and more. You can also choose from different difficulty levels and leagues to suit your preferences and skills.

-

To start Manager Mode, you will need to go to the main menu of FIFA 22 Mobile and tap on the "Manager" icon. You will then be able to create your manager profile, choose your team name, logo, kit, stadium, and budget. You will also be able to customize your team's appearance, attributes, skills, and chemistry. You can then start playing matches against other teams or simulate them if you want to save time.

-

fifa 22 mobile apk download for android
-how to download fifa 22 mobile on android
-fifa 22 mobile android release date
-fifa 22 mobile android gameplay
-fifa 22 mobile android offline
-fifa 22 mobile android requirements
-fifa 22 mobile android mod apk
-fifa 22 mobile android beta
-fifa 22 mobile android hack
-fifa 22 mobile android size
-fifa 22 mobile android free download
-fifa 22 mobile android review
-fifa 22 mobile android tips and tricks
-fifa 22 mobile android best players
-fifa 22 mobile android cheats
-fifa 22 mobile android update
-fifa 22 mobile android graphics settings
-fifa 22 mobile android controller support
-fifa 22 mobile android online mode
-fifa 22 mobile android world cup mode
-fifa 22 mobile android vs ios
-fifa 22 mobile android coins generator
-fifa 22 mobile android ultimate team mode
-fifa 22 mobile android champions league mode
-fifa 22 mobile android icons and heroes mode
-fifa 22 mobile android manager mode
-fifa 22 mobile android career mode
-fifa 22 mobile android skills tutorial
-fifa 22 mobile android squad builder
-fifa 22 mobile android transfer market
-fifa 22 mobile android live events
-fifa 22 mobile android seasons mode
-fifa 22 mobile android tournaments mode
-fifa 22 mobile android vs attack mode
-fifa 22 mobile android head to head mode
-fifa 22 mobile android customisation options
-fifa 22 mobile android device compatibility list
-fifa 22 mobile android error fix guide
-fifa 22 mobile android patch notes
-fifa 22 mobile android news and updates

-

UEFA Champions League, Europa League, and Europa Conference League

-

If you want to compete against the best teams from club football's most prestigious tournaments, you can play in the UEFA Champions League, Europa League, or Europa Conference League. These are modes that let you participate in these competitions with your Ultimate Team or Manager Mode team, and try to win the coveted trophies and glory.

-

To play in these modes, you will need to go to the main menu of FIFA 22 Mobile and tap on the "UEFA" icon. You will then be able to choose which competition you want to enter and start playing matches against other teams from different groups and stages. You will also be able to earn special players and items from these modes, such as Team of the Group Stage (TOTGS), Team of the Knockout Stage (TOTKS), Man of the Match (MOTM), etc.

-

Icons and Heroes

-

If you want to build your Ultimate Team with over 100 soccer legends from different leagues and eras, you can get Icons and Heroes. These are special players that have high ratings and unique attributes that reflect their skills and achievements in real life. Icons are players that have retired from soccer, while Heroes are players that are still active but have made a significant impact on their clubs or leagues.

-

To get Icons and Heroes, you will need to go to the store or the transfer market of FIFA 22 Mobile and look for packs or cards that contain them. You can also earn them from completing certain objectives or challenges in the game. You can then use Icons and Heroes to boost your team's chemistry and performance by matching them with players from the same nation, league, or club. You can also upgrade your Icons and Heroes by completing their special challenges or objectives in the game.

-

How to Download FIFA 22 Mobile on Android Devices

-

Now that you know what FIFA 22 Mobile has to offer, you might be wondering how to download it on your Android device. Well, don't worry, because we have got you covered. Here are the requirements and compatibility, and the steps to download FIFA 22 Mobile on Android devices.

-

Requirements and Compatibility

-

Before you download FIFA 22 Mobile on your Android device, you need to make sure that your device meets the minimum specs and compatibility for the game. According to EA Sports, these are the minimum requirements for FIFA 22 Mobile on Android devices:

- -

You can check if your device meets these requirements by going to the settings menu of your device and looking for the information about your device model, OS version, RAM, processor, and storage space. You can also check if your device is compatible by going to the Google Play Store page of FIFA 22 Mobile and looking for the "HyperMotion Compatible" label.

-

Steps to Download FIFA 22 Mobile on Android Devices

-

If your device meets the requirements and compatibility for FIFA 22 Mobile, you can follow these steps to download it on your Android device:

-
    -
  1. Go to the Google Play Store app on your device and search for "FIFA 22 Mobile" or use this link: FIFA 22 Mobile - Apps on Google Play
  2. -
  3. Tap on the "Install" button and wait for the game to download and install on your device. The game size is about 1.5 GB, so make sure you have a stable internet connection and enough battery life.
  4. -
  5. Once the game is installed, tap on the "Open" button or find the game icon on your home screen or app drawer and tap on it to launch the game.
  6. -
  7. The game will ask you to sign in with your EA account or create a new one if you don't have one. You can also sign in with your Facebook or Google account if you prefer. Signing in with your EA account will allow you to sync your progress and data across different devices and platforms.
  8. -
  9. The game will then ask you to choose your preferred language, region, and difficulty level. You can also customize your controls, graphics, sound, and other settings in the game menu.
  10. -
  11. The game will then take you to the main menu, where you can access different features and modes of FIFA 22 Mobile. You can also watch tutorials and tips videos to learn more about the game.
  12. -
-

Tips and Tricks for Playing FIFA 22 Mobile on Android Devices

-

Now that you have downloaded FIFA 22 Mobile on your Android device, you might want some tips and tricks for playing it better and having more fun. Here are some of them:

-

Use the Advanced Passing System to Create More Opportunities

-

One of the most important skills in soccer is passing, as it allows you to create more opportunities for scoring or defending. In FIFA 22 Mobile, you can use the Advanced Passing System, which is a new feature that lets you use different types of passes, such as through balls, lobbed passes, driven passes, etc., depending on the situation and your preference.

-

To use the Advanced Passing System, you need to tap and hold on the pass button on the right side of the screen, and then swipe in any direction to choose the type of pass you want to make. The longer you hold and swipe, the more power and distance you will give to your pass. You can also use gestures such as double-tap or flick to perform quick passes or one-touch passes.

-

The Advanced Passing System can help you avoid interceptions and turnovers by passing smartly and accurately. You can also use it to create more chances for scoring by finding gaps in the defense or sending long balls to your strikers.

-

Master the New Skill Moves to Outsmart Your Opponents

-

Another way to improve your gameplay in FIFA 22 Mobile is to master the new skill moves, which are special moves that allow you to dribble past defenders, create space, or score goals. FIFA 22 Mobile has over 50 skill moves that you can perform with different players, depending on their skill rating and position.

-

To perform skill moves in FIFA 22 Mobile, you need to swipe on the left side of the screen, where the virtual joystick is located. You can swipe in different directions and combinations to perform different skill moves, such as roulette, rainbow flick, heel to heel, etc. You can also use gestures such as double-tap or flick to perform quick skill moves or feints.

-

Mastering skill moves can help you outsmart your opponents and gain an advantage on the pitch. You can also use skill moves to show off your style and flair, or to humiliate your rivals. You can practice skill moves in training mode or skill games mode, where you can learn how to perform them and when to use them.

-

Use Emote Messages to Communicate with Your Opponents or Teammates

-

One of the fun features in FIFA 22 Mobile is the Emote Messages, which are messages that you can send to your opponents or teammates during a match. Emote Messages allow you to express yourself, taunt your opponents, or celebrate your goals. You can also use Emote Messages to communicate with your teammates, such as asking for a pass, giving instructions, or praising their performance.

-

To use Emote Messages in FIFA 22 Mobile, you need to tap on the emote button on the top right corner of the screen during a match. You will then see a list of Emote Messages that you can choose from, such as "Nice one!", "Sorry!", "Wow!", etc. You can also customize your Emote Messages in the settings menu or buy new ones from the store.

-

Using Emote Messages can make your matches more fun and interactive. You can also use Emote Messages to influence your opponents' emotions and behavior, such as making them angry, nervous, or confident. However, you should be careful not to use Emote Messages in an abusive or offensive way, as this might result in a ban or a report from other players.

-

Conclusion

-

FIFA 22 Mobile is a great soccer game that you can play on your Android device. It has many features and modes that will keep you entertained and challenged for hours. You can also download it easily and play it smoothly on your device, as long as it meets the requirements and compatibility for the game.

-

If you are a soccer fan, you should definitely give FIFA 22 Mobile a try. You will not regret it. You will be able to enjoy the ultimate soccer game experience on your smartphone or tablet, with tons of features, modes, players, teams, stadiums, and more.

-

We hope this article has helped you learn more about FIFA 22 Mobile and how to download and play it on your Android device. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

-

FAQs

-

Here are some of the frequently asked questions about FIFA 22 Mobile:

-

Q: Is FIFA 22 Mobile free to play?

-

A: Yes, FIFA 22 Mobile is free to play on Android devices. However, it does have some optional in-app purchases that can enhance your gameplay or unlock more content.

-

Q: Can I play FIFA 22 Mobile offline?

-

A: No, FIFA 22 Mobile requires an internet connection to play. You will need a stable Wi-Fi or mobile data connection to access all the features and modes of the game.

-

Q: Can I play FIFA 22 Mobile with my friends?

-

A: Yes, FIFA 22 Mobile has a multiplayer mode that allows you to play with your friends or other players online. You can invite your friends to join your team or challenge them to a friendly match. You can also join leagues or tournaments with other players from around the world.

-

Q: How do I update FIFA 22 Mobile?

-

A: FIFA 22 Mobile updates automatically when you launch the game, as long as you have an internet connection and enough storage space on your device. You can also check for updates manually by going to the Google Play Store app on your device and looking for FIFA 22 Mobile.

-

Q: How do I contact EA Sports for support or feedback?

-

A: If you have any issues or problems with FIFA 22 Mobile, you can contact EA Sports for support by going to the settings menu of the game and tapping on the "Help" icon. You will then be able to access the EA Help Center, where you can find answers to common questions, report a bug, request a refund, or contact an EA advisor. You can also provide feedback or suggestions for FIFA 22 Mobile by going to the settings menu of the game and tapping on the "Feedback" icon.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/preconfig/preconfig_scheduling_euler_ancestral_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/preconfig/preconfig_scheduling_euler_ancestral_discrete.py deleted file mode 100644 index ac5aef7bcfcab208a9d4cad699e41b895a29ba64..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/preconfig/preconfig_scheduling_euler_ancestral_discrete.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import paddle - -from ...configuration_utils import ConfigMixin, register_to_config -from ...utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, logging -from ..scheduling_utils import SchedulerMixin - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerAncestralDiscrete -class PreconfigEulerAncestralDiscreteSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: paddle.Tensor - pred_original_sample: Optional[paddle.Tensor] = None - - -class PreconfigEulerAncestralDiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - preconfig: bool = True, - ): - if trained_betas is not None: - self.betas = paddle.to_tensor(trained_betas, dtype="float32") - elif beta_schedule == "linear": - self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32") - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2 - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = paddle.cumprod(self.alphas, 0) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32) - self.sigmas = paddle.to_tensor(sigmas) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = self.sigmas.max() - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy() - self.timesteps = paddle.to_tensor(timesteps, dtype="float32") - self.is_scale_input_called = False - self.preconfig = preconfig - - def scale_model_input( - self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor], **kwargs - ) -> paddle.Tensor: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. - - Args: - sample (`paddle.Tensor`): input sample - timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain - - Returns: - `paddle.Tensor`: scaled input sample - """ - self.is_scale_input_called = True - if kwargs.get("step_index") is not None: - step_index = kwargs["step_index"] - else: - step_index = (self.timesteps == timestep).nonzero().item() - - if not self.preconfig: - sigma = self.sigmas[step_index] - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - else: - return sample * self.latent_scales[step_index] - - def set_timesteps(self, num_inference_steps: int): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - self.num_inference_steps = num_inference_steps - - timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy() - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - self.sigmas = paddle.to_tensor(sigmas) - self.timesteps = paddle.to_tensor(timesteps, dtype="float32") - if self.preconfig: - self.sigma_up = [] - self.sigma_down = [] - for step_index_i in range(len(self.timesteps)): - sigma_from = self.sigmas[step_index_i] - sigma_to = self.sigmas[step_index_i + 1] - sigma_up = (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5 - sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5 - self.sigma_up.append(sigma_up) - self.sigma_down.append(sigma_down) - self.latent_scales = 1 / ((self.sigmas**2 + 1) ** 0.5) - - def step( - self, - model_output: paddle.Tensor, - timestep: Union[float, paddle.Tensor], - sample: paddle.Tensor, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - return_dict: bool = True, - **kwargs - ) -> Union[PreconfigEulerAncestralDiscreteSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`float`): current timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - generator (`paddle.Generator`, optional): Random number generator. - return_dict (`bool`): option for returning tuple rather than PreconfigEulerAncestralDiscreteSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.PreconfigEulerAncestralDiscreteSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.PreconfigEulerAncestralDiscreteSchedulerOutput`] if `return_dict` is True, otherwise - a `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - if not self.is_scale_input_called: - logger.warning( - "The `scale_model_input` function should be called before `step` to ensure correct denoising. " - "See `StableDiffusionPipeline` for a usage example." - ) - if kwargs.get("return_pred_original_sample") is not None: - return_pred_original_sample = kwargs["return_pred_original_sample"] - else: - return_pred_original_sample = True - if kwargs.get("step_index") is not None: - step_index = kwargs["step_index"] - else: - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - if self.config.prediction_type == "epsilon" and not return_pred_original_sample: - derivative = model_output - pred_original_sample = None - else: - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - pred_original_sample = sample - sigma * model_output - elif self.config.prediction_type == "v_prediction": - # * c_out + input * c_skip - pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - derivative = (sample - pred_original_sample) / sigma - if not self.preconfig: - sigma_from = self.sigmas[step_index] - sigma_to = self.sigmas[step_index + 1] - sigma_up = (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5 - sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5 - else: - sigma_up = self.sigma_up[step_index] - sigma_down = self.sigma_down[step_index] - # 2. Convert to an ODE derivative - dt = sigma_down - sigma - prev_sample = sample + derivative * dt - noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator) - prev_sample = prev_sample + noise * sigma_up - if not return_dict: - if not return_pred_original_sample: - return (prev_sample,) - else: - return (prev_sample, pred_original_sample) - - return PreconfigEulerAncestralDiscreteSchedulerOutput( - prev_sample=prev_sample, pred_original_sample=pred_original_sample - ) - - def add_noise( - self, - original_samples: paddle.Tensor, - noise: paddle.Tensor, - timesteps: paddle.Tensor, - ) -> paddle.Tensor: - # Make sure sigmas and timesteps have the same dtype as original_samples - self.sigmas = self.sigmas.cast(original_samples.dtype) - - schedule_timesteps = self.timesteps - step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] - - sigma = self.sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/1toTree/lora_test/ppdiffusers/training_utils.py b/spaces/1toTree/lora_test/ppdiffusers/training_utils.py deleted file mode 100644 index 850b3d2f386becd26e3b4077d1f49d42c4192ada..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/training_utils.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import contextlib -import copy -import os -import random - -import numpy as np -import paddle - -from .utils import logging - -logger = logging.get_logger(__name__) - - -def enable_full_determinism(seed: int): - """ - Helper function for reproducible behavior during distributed training. - """ - # set seed first - set_seed(seed) - - # Enable Paddle deterministic mode. This potentially requires either the environment - # variable 'CUDA_LAUNCH_BLOCKING' or 'CUBLAS_WORKSPACE_CONFIG' to be set, - # depending on the CUDA version, so we set them both here - os.environ["CUDA_LAUNCH_BLOCKING"] = "1" - os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" - os.environ["FLAGS_cudnn_deterministic"] = "True" - os.environ["FLAGS_benchmark"] = "True" - - -def set_seed(seed: int = None): - """ - Args: - Helper function for reproducible behavior to set the seed in `random`, `numpy`, `paddle`. - seed (`int`): The seed to set. - """ - if seed is not None: - random.seed(seed) - np.random.seed(seed) - paddle.seed(seed) - - -class EMAModel: - """ - Exponential Moving Average of models weights - """ - - def __init__(self, model, update_after_step=0, inv_gamma=1.0, power=2 / 3, min_value=0.0, max_value=0.9999): - """ - @crowsonkb's notes on EMA Warmup: - If gamma=1 and power=1, implements a simple average. gamma=1, power=2/3 are good values for models you plan - to train for a million or more steps (reaches decay factor 0.999 at 31.6K steps, 0.9999 at 1M steps), - gamma=1, power=3/4 for models you plan to train for less (reaches decay factor 0.999 at 10K steps, 0.9999 - at 215.4k steps). - Args: - inv_gamma (float): Inverse multiplicative factor of EMA warmup. Default: 1. - power (float): Exponential factor of EMA warmup. Default: 2/3. - min_value (float): The minimum EMA decay rate. Default: 0. - """ - - self.averaged_model = copy.deepcopy(model) - self.averaged_model.eval() - for params in self.averaged_model.parameters(): - params.stop_gradient = True - - self.update_after_step = update_after_step - self.inv_gamma = inv_gamma - self.power = power - self.min_value = min_value - self.max_value = max_value - - self.decay = 0.0 - self.optimization_step = 0 - - def get_decay(self, optimization_step): - """ - Compute the decay factor for the exponential moving average. - """ - step = max(0, optimization_step - self.update_after_step - 1) - value = 1 - (1 + step / self.inv_gamma) ** -self.power - - if step <= 0: - return 0.0 - - return max(self.min_value, min(value, self.max_value)) - - @paddle.no_grad() - def step(self, new_model): - ema_state_dict = {} - ema_params = self.averaged_model.state_dict() - - self.decay = self.get_decay(self.optimization_step) - - for key, param in new_model.named_parameters(): - if isinstance(param, dict): - continue - try: - ema_param = ema_params[key] - except KeyError: - ema_param = param.cast("float32").clone() if param.ndim == 1 else copy.deepcopy(param) - ema_params[key] = ema_param - - if param.stop_gradient: - ema_params[key].copy_(param.cast(ema_param.dtype), True) - ema_param = ema_params[key] - else: - ema_param.scale_(self.decay) - ema_param.add_(param.cast(ema_param.dtype) * (1 - self.decay)) - - ema_state_dict[key] = ema_param - - for key, param in new_model.named_buffers(): - ema_state_dict[key] = param - - self.averaged_model.load_dict(ema_state_dict) - self.optimization_step += 1 - - -@contextlib.contextmanager -def main_process_first(desc="work"): - if paddle.distributed.get_world_size() > 1: - rank = paddle.distributed.get_rank() - is_main_process = rank == 0 - main_process_desc = "main local process" - - try: - if not is_main_process: - # tell all replicas to wait - logger.debug(f"{rank}: waiting for the {main_process_desc} to perform {desc}") - paddle.distributed.barrier() - yield - finally: - if is_main_process: - # the wait is over - logger.debug(f"{rank}: {main_process_desc} completed {desc}, releasing all replicas") - paddle.distributed.barrier() - else: - yield diff --git a/spaces/2ndelement/voicevox/ui_template/ui.html b/spaces/2ndelement/voicevox/ui_template/ui.html deleted file mode 100644 index a37b9e1040cf1952564f4507ee55ade0384c90a7..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/ui_template/ui.html +++ /dev/null @@ -1,120 +0,0 @@ - - - - - VOICEVOX Engine 設定 - - - - - - - -
-
- - -
- - -
-

- allまたはlocalappsを指定。allはすべてを許可します。 -

-

- localappsはオリジン間リソース共有ポリシーを、app://.とlocalhost関連に限定します。 -

-

- その他のオリジンはallow_originオプションで追加できます。デフォルトはlocalapps。 -

-
-
- -
- - -
- 許可するオリジンを指定します。複数指定する場合は、直後にスペースで区切って追加できます。 -
-
- - - - -
-
- - diff --git a/spaces/801artistry/RVC801/infer/modules/train/extract/extract_f0_rmvpe_dml.py b/spaces/801artistry/RVC801/infer/modules/train/extract/extract_f0_rmvpe_dml.py deleted file mode 100644 index 6abb1898550664ca600cebbb6d37ba0de8a3d312..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/modules/train/extract/extract_f0_rmvpe_dml.py +++ /dev/null @@ -1,139 +0,0 @@ -import os -import sys -import traceback - -import parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging - -import numpy as np -import pyworld - -from infer.lib.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) - -exp_dir = sys.argv[1] -import torch_directml - -device = torch_directml.device(torch_directml.default_device()) -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def compute_f0(self, path, f0_method): - x = load_audio(path, self.fs) - # p_len = x.shape[0] // self.hop - if f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - print("Loading rmvpe model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=False, device=device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method): - if len(paths) == 0: - printt("no-f0-todo") - else: - printt("todo-f0-%s" % len(paths)) - n = max(len(paths) // 5, 1) # 每个进程最多打印5条 - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if idx % n == 0: - printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path)) - if ( - os.path.exists(opt_path1 + ".npy") == True - and os.path.exists(opt_path2 + ".npy") == True - ): - continue - featur_pit = self.compute_f0(inp_path, f0_method) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - except: - printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc())) - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - paths.append([inp_path, opt_path1, opt_path2]) - try: - featureInput.go(paths, "rmvpe") - except: - printt("f0_all_fail-%s" % (traceback.format_exc())) - # ps = [] - # for i in range(n_p): - # p = Process( - # target=featureInput.go, - # args=( - # paths[i::n_p], - # f0method, - # ), - # ) - # ps.append(p) - # p.start() - # for i in range(n_p): - # ps[i].join() diff --git a/spaces/AB-TW/team-ai/memories.py b/spaces/AB-TW/team-ai/memories.py deleted file mode 100644 index a01f35dd7a8ad236b442755b368a4f4085b75765..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/memories.py +++ /dev/null @@ -1,61 +0,0 @@ -from typing import Any, Dict, List, Optional -from langchain.memory.chat_memory import BaseChatMemory -from langchain.schema import BaseMessage, HumanMessage, AIMessage, SystemMessage, ChatMessage - - -def get_buffer_string( - messages: List[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI" -) -> str: - """Get buffer string of messages.""" - string_messages = [] - for m in messages: - if isinstance(m, HumanMessage): - print("HumanMessage: " + m.content) - role = human_prefix + ": " - elif isinstance(m, AIMessage): - print("AIMessage" + m.content) - role = "" - elif isinstance(m, SystemMessage): - print("SystemMessage") - role = "System: " - elif isinstance(m, ChatMessage): - print("ChatMessage") - role = m.role + ": " - else: - raise ValueError(f"Got unsupported message type: {m}") - - string_messages.append(f"{role + m.content}") - - return "\n".join(string_messages) - - -class HumenFeedbackBufferMemory(BaseChatMemory): - """Buffer for storing conversation memory.""" - - human_prefix: str = "Human" - ai_prefix: str = "AI" - memory_key: str = "history" #: :meta private: - - @property - def buffer(self) -> Any: - """String buffer of memory.""" - if self.return_messages: - return self.chat_memory.messages - else: - return get_buffer_string( - self.chat_memory.messages, - human_prefix=self.human_prefix, - ai_prefix=self.ai_prefix, - ) - - @property - def memory_variables(self) -> List[str]: - """Will always return list of memory variables. - - :meta private: - """ - return [self.memory_key] - - def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]: - """Return history buffer.""" - return {self.memory_key: self.buffer} diff --git a/spaces/AIZ2H/04-Gradio-SOTA-Seq2Seq-AutoQA/qasrl_model_pipeline.py b/spaces/AIZ2H/04-Gradio-SOTA-Seq2Seq-AutoQA/qasrl_model_pipeline.py deleted file mode 100644 index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000 --- a/spaces/AIZ2H/04-Gradio-SOTA-Seq2Seq-AutoQA/qasrl_model_pipeline.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Optional -import json -from argparse import Namespace -from pathlib import Path -from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer - -def get_markers_for_model(is_t5_model: bool) -> Namespace: - special_tokens_constants = Namespace() - if is_t5_model: - # T5 model have 100 special tokens by default - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - - else: - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - return special_tokens_constants - -def load_trained_model(name_or_path): - import huggingface_hub as HFhub - tokenizer = AutoTokenizer.from_pretrained(name_or_path) - model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path) - # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory - kwargs_filename = None - if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files - kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json") - elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists(): - kwargs_filename = Path(name_or_path) / "experiment_kwargs.json" - - if kwargs_filename: - preprocessing_kwargs = json.load(open(kwargs_filename)) - # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing - model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs) - model.config.update(preprocessing_kwargs) - return model, tokenizer - - -class QASRL_Pipeline(Text2TextGenerationPipeline): - def __init__(self, model_repo: str, **kwargs): - model, tokenizer = load_trained_model(model_repo) - super().__init__(model, tokenizer, framework="pt") - self.is_t5_model = "t5" in model.config.model_type - self.special_tokens = get_markers_for_model(self.is_t5_model) - self.data_args = model.config.preprocessing_kwargs - # backward compatibility - default keyword values implemeted in `run_summarization`, thus not saved in `preprocessing_kwargs` - if "predicate_marker_type" not in vars(self.data_args): - self.data_args.predicate_marker_type = "generic" - if "use_bilateral_predicate_marker" not in vars(self.data_args): - self.data_args.use_bilateral_predicate_marker = True - if "append_verb_form" not in vars(self.data_args): - self.data_args.append_verb_form = True - self._update_config(**kwargs) - - def _update_config(self, **kwargs): - " Update self.model.config with initialization parameters and necessary defaults. " - # set default values that will always override model.config, but can overriden by __init__ kwargs - kwargs["max_length"] = kwargs.get("max_length", 80) - # override model.config with kwargs - for k,v in kwargs.items(): - self.model.config.__dict__[k] = v - - def _sanitize_parameters(self, **kwargs): - preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {} - if "predicate_marker" in kwargs: - preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"] - if "predicate_type" in kwargs: - preprocess_kwargs["predicate_type"] = kwargs["predicate_type"] - if "verb_form" in kwargs: - preprocess_kwargs["verb_form"] = kwargs["verb_form"] - return preprocess_kwargs, forward_kwargs, postprocess_kwargs - - def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None): - # Here, inputs is string or list of strings; apply string postprocessing - if isinstance(inputs, str): - processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form) - elif hasattr(inputs, "__iter__"): - processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs] - else: - raise ValueError("inputs must be str or Iterable[str]") - # Now pass to super.preprocess for tokenization - return super().preprocess(processed_inputs) - - def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str: - sent_tokens = seq.split(" ") - assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word" - predicate_idx = sent_tokens.index(predicate_marker) - sent_tokens.remove(predicate_marker) - sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)]) - predicate = sent_tokens[predicate_idx] - sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))]) - - if self.data_args.predicate_marker_type == "generic": - predicate_marker = self.special_tokens.predicate_generic_marker - # In case we want special marker for each predicate type: """ - elif self.data_args.predicate_marker_type == "pred_type": - assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) on it" - assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'" - predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker , - "nominal": self.special_tokens.predicate_nominalization_marker - }[predicate_type] - - if self.data_args.use_bilateral_predicate_marker: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}" - else: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}" - - # embed also verb_form - if self.data_args.append_verb_form and verb_form is None: - raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)") - elif self.data_args.append_verb_form: - seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} " - else: - seq = f"{seq} " - - # append source prefix (for t5 models) - prefix = self._get_source_prefix(predicate_type) - - return prefix + seq - - def _get_source_prefix(self, predicate_type: Optional[str]): - if not self.is_t5_model or self.data_args.source_prefix is None: - return '' - if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x - return self.data_args.source_prefix - if self.data_args.source_prefix == "": - if predicate_type is None: - raise ValueError("source_prefix is '' but input no `predicate_type`.") - else: - return f"Generate QAs for {predicate_type} QASRL: " - - def _forward(self, *args, **kwargs): - outputs = super()._forward(*args, **kwargs) - return outputs - - - def postprocess(self, model_outputs): - output_seq = self.tokenizer.decode( - model_outputs["output_ids"].squeeze(), - skip_special_tokens=False, - clean_up_tokenization_spaces=False, - ) - output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip() - qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs) - qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs] - return {"generated_text": output_seq, - "QAs": qas} - - def _postrocess_qa(self, seq: str) -> str: - # split question and answers - if self.special_tokens.separator_output_question_answer in seq: - question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2] - else: - print("invalid format: no separator between question and answer found...") - return None - # question, answer = seq, '' # Or: backoff to only question - # skip "_" slots in questions - question = ' '.join(t for t in question.split(' ') if t != '_') - answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)] - return {"question": question, "answers": answers} - - -if __name__ == "__main__": - pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline") - res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal") - res2 = pipe(["The doctor was interested in Luke 's treatment .", - "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10) - res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal") - print(res1) - print(res2) - print(res3) - \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Equing.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Equing.py deleted file mode 100644 index 794274f26a417b41ba487bcd113741c0bc61072e..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Equing.py +++ /dev/null @@ -1,81 +0,0 @@ -from __future__ import annotations - -import json -from abc import ABC, abstractmethod - -import requests - -from ...typing import Any, CreateResult -from ..base_provider import BaseProvider - - -class Equing(BaseProvider): - url: str = 'https://next.eqing.tech/' - working = False - supports_stream = True - supports_gpt_35_turbo = True - supports_gpt_4 = False - - @staticmethod - @abstractmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, **kwargs: Any) -> CreateResult: - - headers = { - 'authority' : 'next.eqing.tech', - 'accept' : 'text/event-stream', - 'accept-language' : 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control' : 'no-cache', - 'content-type' : 'application/json', - 'origin' : 'https://next.eqing.tech', - 'plugins' : '0', - 'pragma' : 'no-cache', - 'referer' : 'https://next.eqing.tech/', - 'sec-ch-ua' : '"Not/A)Brand";v="99", "Google Chrome";v="115", "Chromium";v="115"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36', - 'usesearch' : 'false', - 'x-requested-with' : 'XMLHttpRequest' - } - - json_data = { - 'messages' : messages, - 'stream' : stream, - 'model' : model, - 'temperature' : kwargs.get('temperature', 0.5), - 'presence_penalty' : kwargs.get('presence_penalty', 0), - 'frequency_penalty' : kwargs.get('frequency_penalty', 0), - 'top_p' : kwargs.get('top_p', 1), - } - - response = requests.post('https://next.eqing.tech/api/openai/v1/chat/completions', - headers=headers, json=json_data, stream=stream) - - if not stream: - yield response.json()["choices"][0]["message"]["content"] - return - - for line in response.iter_content(chunk_size=1024): - if line: - if b'content' in line: - line_json = json.loads(line.decode('utf-8').split('data: ')[1]) - token = line_json['choices'][0]['delta'].get('content') - if token: - yield token - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/hiddenedit/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/hiddenedit/Factory.d.ts deleted file mode 100644 index b7896df35ba8ba2b4cc772ef05b7a219050d6105..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/hiddenedit/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import HiddenEdit from './HiddenEdit'; - -export default function ( - textObject: Phaser.GameObjects.GameObject, - config?: HiddenEdit.IConfig -): HiddenEdit; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/GetAddChildConfig.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/GetAddChildConfig.js deleted file mode 100644 index a10843a1c2fb59082791e6f7f4d862d1bddd4f4f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/GetAddChildConfig.js +++ /dev/null @@ -1,70 +0,0 @@ -const GetValue = Phaser.Utils.Objects.GetValue; - -var GetAddChildConfig = function (config, key, defaultValues) { - var proportion = GetValue(config, `proportion.${key}`, defaultValues.proportion); - var align = GetValue(config, `align.${key}`, 'center'); - var padding = GetValue(config, `space.${key}`, undefined); - if ((typeof (padding) === 'number') && defaultValues.paddingKey) { - var paddingNum = padding; - padding = {}; - padding[defaultValues.paddingKey] = paddingNum; - } - var expand = GetValue(config, `expand.${key}`, true); - - return { - proportion: proportion, - align: align, - padding: padding, - expand: expand, - } -} - -var GetAddHeaderConfig = function (config) { - return GetAddChildConfig(config, 'header', { - proportion: 0, - paddingKey: 'bottom' - }) -} - -var GetAddLeftSideConfig = function (config) { - return GetAddChildConfig(config, 'leftSide', { - proportion: 0, - paddingKey: 'right' - }) -} - -var GetAddContentConfig = function (config) { - return GetAddChildConfig(config, 'content', { - proportion: 1 - }) -} - -var GetAddRightSideConfig = function (config) { - return GetAddChildConfig(config, 'rightSide', { - proportion: 0, - paddingKey: 'left' - }) -} - -var GetAddFooterConfig = function (config) { - return GetAddChildConfig(config, 'footer', { - proportion: 0, - paddingKey: 'top' - }) -} - -var GetAddContainerConfig = function (config) { - return { - proportion: 1, - align: 'center', - padding: 0, - expand: true, - } -} - -export { - GetAddHeaderConfig, - GetAddLeftSideConfig, GetAddContentConfig, GetAddRightSideConfig, - GetAddFooterConfig, - GetAddContainerConfig -} diff --git a/spaces/AkitoP/umamusume_bert_vits2/train_ms_acc.py b/spaces/AkitoP/umamusume_bert_vits2/train_ms_acc.py deleted file mode 100644 index 0c86a796ce81978c6b5b56eecfb31c54df2dd773..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/train_ms_acc.py +++ /dev/null @@ -1,623 +0,0 @@ -# flake8: noqa: E402 - -import os -import torch -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler, -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import generator_loss, discriminator_loss, feature_loss, kl_loss -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = ( - True # If encontered training problem,please try to disable TF32. -) -torch.set_float32_matmul_precision("medium") -torch.backends.cudnn.benchmark = True -torch.backends.cuda.sdp_kernel("flash") -torch.backends.cuda.enable_flash_sdp(True) -torch.backends.cuda.enable_mem_efficient_sdp( - True -) # Not available if torch version is lower than 2.0 -torch.backends.cuda.enable_math_sdp(True) -global_step = 0 - - -def run(): - dist.init_process_group( - backend="gloo", - init_method='tcp://127.0.0.1:11451', # Due to some training problem,we proposed to use gloo instead of nccl. - rank=0, - world_size=1, - ) # Use torchrun instead of mp.spawn - rank = dist.get_rank() - n_gpus = dist.get_world_size() - hps = utils.get_hparams() - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader( - train_dataset, - num_workers=16, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler, - persistent_workers=True, - prefetch_factor=4, - ) # DataLoader config could be adjusted. - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader( - eval_dataset, - num_workers=0, - shuffle=False, - batch_size=1, - pin_memory=True, - drop_last=False, - collate_fn=collate_fn, - ) - if ( - "use_noise_scaled_mas" in hps.model.keys() - and hps.model.use_noise_scaled_mas is True - ): - print("Using noise scaled MAS for VITS2") - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if ( - "use_duration_discriminator" in hps.model.keys() - and hps.model.use_duration_discriminator is True - ): - print("Using duration discriminator for VITS2") - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if ( - "use_spk_conditioned_encoder" in hps.model.keys() - and hps.model.use_spk_conditioned_encoder is True - ): - if hps.data.n_speakers == 0: - raise ValueError( - "n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model" - ) - else: - print("Using normal encoder for VITS1") - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial=mas_noise_scale_initial, - noise_scale_delta=noise_scale_delta, - **hps.model, - ).cuda(rank) - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - try: - if net_dur_disc is not None: - _, _, dur_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), - net_dur_disc, - optim_dur_disc, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_g, g_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), - net_g, - optim_g, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_d, d_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), - net_d, - optim_d, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - if not optim_g.param_groups[0].get("initial_lr"): - optim_g.param_groups[0]["initial_lr"] = g_resume_lr - if not optim_d.param_groups[0].get("initial_lr"): - optim_d.param_groups[0]["initial_lr"] = d_resume_lr - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - if net_dur_disc is not None: - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR( - optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - - - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, eval_loader], - logger, - [writer, writer_eval], - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, None], - None, - None, - ) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -__ACCUMULATION_STEP__ = 6 -__CURRENT_ACCUMULATION_STEP__ = 0 - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers -): - global __ACCUMULATION_STEP__ - global __CURRENT_ACCUMULATION_STEP__ - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = ( - net_g.module.mas_noise_scale_initial - - net_g.module.noise_scale_delta * global_step - ) - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True - ) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda( - rank, non_blocking=True - ) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True - ) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - ja_bert = ja_bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - ( - y_hat, - l_length, - attn, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - (hidden_x, logw, logw_), - ) = net_g( - x, - x_lengths, - spec, - spec_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - - y = commons.slice_segments( - y, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc( - hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach() - ) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - ( - loss_dur_disc, - losses_dur_disc_r, - losses_dur_disc_g, - ) = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - - - scaler.scale(loss_disc_all/__ACCUMULATION_STEP__).backward() - __CURRENT_ACCUMULATION_STEP__ += 1 - - if __CURRENT_ACCUMULATION_STEP__ == __ACCUMULATION_STEP__: - __CURRENT_ACCUMULATION_STEP__ = 0 - - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - optim_d.zero_grad() - - - - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - - - scaler.scale(loss_gen_all/__ACCUMULATION_STEP__).backward() - if __CURRENT_ACCUMULATION_STEP__ == __ACCUMULATION_STEP__: - __CURRENT_ACCUMULATION_STEP__ = 0 - - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - optim_g.zero_grad() - - - - - if rank == 0: - if (global_step-1) % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc_all, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/dur": loss_dur, - "loss/g/kl": loss_kl, - } - ) - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - "all/attn": utils.plot_alignment_to_numpy( - attn[0, 0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - - if (global_step-1) % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - if net_dur_disc is not None: - utils.save_checkpoint( - net_dur_disc, - optim_dur_disc, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)), - ) - keep_ckpts = getattr(hps.train, "keep_ckpts", 5) - if keep_ckpts > 0: - utils.clean_checkpoints( - path_to_models=hps.model_dir, - n_ckpts_to_keep=keep_ckpts, - sort_by_time=True, - ) - - global_step += 1 - - if rank == 0: - logger.info("====> Epoch: {} ===>{}".format(epoch, __CURRENT_ACCUMULATION_STEP__)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - ja_bert = ja_bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer( - x, - x_lengths, - speakers, - tone, - language, - bert, - ja_bert, - y=spec, - max_len=1000, - sdp_ratio=0.0 if not use_sdp else 1.0, - ) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - image_dict.update( - { - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].cpu().numpy() - ) - } - ) - audio_dict.update( - { - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[ - 0, :, : y_hat_lengths[0] - ] - } - ) - image_dict.update( - { - f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - mel[0].cpu().numpy() - ) - } - ) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, : y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate, - ) - generator.train() - - -if __name__ == "__main__": - run() diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/build.py b/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/build.py deleted file mode 100644 index 2928af83b2b34b8cbcaa1e1be7146d9fb58e5e7c..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/build.py +++ /dev/null @@ -1,19 +0,0 @@ -from modeling.ifrnet import IFRNet, Discriminator, PatchDiscriminator, MLP -from modeling.benchmark import UNet - - -def build_model(args): - if args.MODEL.NAME.lower() == "ifrnet": - net = IFRNet(base_n_channels=args.MODEL.IFR.NUM_CHANNELS, destyler_n_channels=args.MODEL.IFR.DESTYLER_CHANNELS) - mlp = MLP(base_n_channels=args.MODEL.IFR.NUM_CHANNELS, num_class=args.MODEL.NUM_CLASS) - elif args.MODEL.NAME.lower() == "ifr-no-aux": - net = IFRNet(base_n_channels=args.MODEL.IFR.NUM_CHANNELS, destyler_n_channels=args.MODEL.IFR.DESTYLER_CHANNELS) - mlp = None - else: - raise NotImplementedError - return net, mlp - - -def build_discriminators(args): - return Discriminator(base_n_channels=args.MODEL.D.NUM_CHANNELS), PatchDiscriminator(base_n_channels=args.MODEL.D.NUM_CHANNELS) - diff --git a/spaces/AlexWang/lama/saicinpainting/training/data/datasets.py b/spaces/AlexWang/lama/saicinpainting/training/data/datasets.py deleted file mode 100644 index c4f503dafffb970d8dbaca33934da417036d1e55..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/data/datasets.py +++ /dev/null @@ -1,304 +0,0 @@ -import glob -import logging -import os -import random - -import albumentations as A -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -import webdataset -from omegaconf import open_dict, OmegaConf -from skimage.feature import canny -from skimage.transform import rescale, resize -from torch.utils.data import Dataset, IterableDataset, DataLoader, DistributedSampler, ConcatDataset - -from saicinpainting.evaluation.data import InpaintingDataset as InpaintingEvaluationDataset, \ - OurInpaintingDataset as OurInpaintingEvaluationDataset, ceil_modulo, InpaintingEvalOnlineDataset -from saicinpainting.training.data.aug import IAAAffine2, IAAPerspective2 -from saicinpainting.training.data.masks import get_mask_generator - -LOGGER = logging.getLogger(__name__) - - -class InpaintingTrainDataset(Dataset): - def __init__(self, indir, mask_generator, transform): - self.in_files = list(glob.glob(os.path.join(indir, '**', '*.jpg'), recursive=True)) - self.mask_generator = mask_generator - self.transform = transform - self.iter_i = 0 - - def __len__(self): - return len(self.in_files) - - def __getitem__(self, item): - path = self.in_files[item] - img = cv2.imread(path) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = self.transform(image=img)['image'] - img = np.transpose(img, (2, 0, 1)) - # TODO: maybe generate mask before augmentations? slower, but better for segmentation-based masks - mask = self.mask_generator(img, iter_i=self.iter_i) - self.iter_i += 1 - return dict(image=img, - mask=mask) - - -class InpaintingTrainWebDataset(IterableDataset): - def __init__(self, indir, mask_generator, transform, shuffle_buffer=200): - self.impl = webdataset.Dataset(indir).shuffle(shuffle_buffer).decode('rgb').to_tuple('jpg') - self.mask_generator = mask_generator - self.transform = transform - - def __iter__(self): - for iter_i, (img,) in enumerate(self.impl): - img = np.clip(img * 255, 0, 255).astype('uint8') - img = self.transform(image=img)['image'] - img = np.transpose(img, (2, 0, 1)) - mask = self.mask_generator(img, iter_i=iter_i) - yield dict(image=img, - mask=mask) - - -class ImgSegmentationDataset(Dataset): - def __init__(self, indir, mask_generator, transform, out_size, segm_indir, semantic_seg_n_classes): - self.indir = indir - self.segm_indir = segm_indir - self.mask_generator = mask_generator - self.transform = transform - self.out_size = out_size - self.semantic_seg_n_classes = semantic_seg_n_classes - self.in_files = list(glob.glob(os.path.join(indir, '**', '*.jpg'), recursive=True)) - - def __len__(self): - return len(self.in_files) - - def __getitem__(self, item): - path = self.in_files[item] - img = cv2.imread(path) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = cv2.resize(img, (self.out_size, self.out_size)) - img = self.transform(image=img)['image'] - img = np.transpose(img, (2, 0, 1)) - mask = self.mask_generator(img) - segm, segm_classes= self.load_semantic_segm(path) - result = dict(image=img, - mask=mask, - segm=segm, - segm_classes=segm_classes) - return result - - def load_semantic_segm(self, img_path): - segm_path = img_path.replace(self.indir, self.segm_indir).replace(".jpg", ".png") - mask = cv2.imread(segm_path, cv2.IMREAD_GRAYSCALE) - mask = cv2.resize(mask, (self.out_size, self.out_size)) - tensor = torch.from_numpy(np.clip(mask.astype(int)-1, 0, None)) - ohe = F.one_hot(tensor.long(), num_classes=self.semantic_seg_n_classes) # w x h x n_classes - return ohe.permute(2, 0, 1).float(), tensor.unsqueeze(0) - - -def get_transforms(transform_variant, out_size): - if transform_variant == 'default': - transform = A.Compose([ - A.RandomScale(scale_limit=0.2), # +/- 20% - A.PadIfNeeded(min_height=out_size, min_width=out_size), - A.RandomCrop(height=out_size, width=out_size), - A.HorizontalFlip(), - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'distortions': - transform = A.Compose([ - IAAPerspective2(scale=(0.0, 0.06)), - IAAAffine2(scale=(0.7, 1.3), - rotate=(-40, 40), - shear=(-0.1, 0.1)), - A.PadIfNeeded(min_height=out_size, min_width=out_size), - A.OpticalDistortion(), - A.RandomCrop(height=out_size, width=out_size), - A.HorizontalFlip(), - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'distortions_scale05_1': - transform = A.Compose([ - IAAPerspective2(scale=(0.0, 0.06)), - IAAAffine2(scale=(0.5, 1.0), - rotate=(-40, 40), - shear=(-0.1, 0.1), - p=1), - A.PadIfNeeded(min_height=out_size, min_width=out_size), - A.OpticalDistortion(), - A.RandomCrop(height=out_size, width=out_size), - A.HorizontalFlip(), - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'distortions_scale03_12': - transform = A.Compose([ - IAAPerspective2(scale=(0.0, 0.06)), - IAAAffine2(scale=(0.3, 1.2), - rotate=(-40, 40), - shear=(-0.1, 0.1), - p=1), - A.PadIfNeeded(min_height=out_size, min_width=out_size), - A.OpticalDistortion(), - A.RandomCrop(height=out_size, width=out_size), - A.HorizontalFlip(), - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'distortions_scale03_07': - transform = A.Compose([ - IAAPerspective2(scale=(0.0, 0.06)), - IAAAffine2(scale=(0.3, 0.7), # scale 512 to 256 in average - rotate=(-40, 40), - shear=(-0.1, 0.1), - p=1), - A.PadIfNeeded(min_height=out_size, min_width=out_size), - A.OpticalDistortion(), - A.RandomCrop(height=out_size, width=out_size), - A.HorizontalFlip(), - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'distortions_light': - transform = A.Compose([ - IAAPerspective2(scale=(0.0, 0.02)), - IAAAffine2(scale=(0.8, 1.8), - rotate=(-20, 20), - shear=(-0.03, 0.03)), - A.PadIfNeeded(min_height=out_size, min_width=out_size), - A.RandomCrop(height=out_size, width=out_size), - A.HorizontalFlip(), - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'non_space_transform': - transform = A.Compose([ - A.CLAHE(), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2), - A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=30, val_shift_limit=5), - A.ToFloat() - ]) - elif transform_variant == 'no_augs': - transform = A.Compose([ - A.ToFloat() - ]) - else: - raise ValueError(f'Unexpected transform_variant {transform_variant}') - return transform - - -def make_default_train_dataloader(indir, kind='default', out_size=512, mask_gen_kwargs=None, transform_variant='default', - mask_generator_kind="mixed", dataloader_kwargs=None, ddp_kwargs=None, **kwargs): - LOGGER.info(f'Make train dataloader {kind} from {indir}. Using mask generator={mask_generator_kind}') - - mask_generator = get_mask_generator(kind=mask_generator_kind, kwargs=mask_gen_kwargs) - transform = get_transforms(transform_variant, out_size) - - if kind == 'default': - dataset = InpaintingTrainDataset(indir=indir, - mask_generator=mask_generator, - transform=transform, - **kwargs) - elif kind == 'default_web': - dataset = InpaintingTrainWebDataset(indir=indir, - mask_generator=mask_generator, - transform=transform, - **kwargs) - elif kind == 'img_with_segm': - dataset = ImgSegmentationDataset(indir=indir, - mask_generator=mask_generator, - transform=transform, - out_size=out_size, - **kwargs) - else: - raise ValueError(f'Unknown train dataset kind {kind}') - - if dataloader_kwargs is None: - dataloader_kwargs = {} - - is_dataset_only_iterable = kind in ('default_web',) - - if ddp_kwargs is not None and not is_dataset_only_iterable: - dataloader_kwargs['shuffle'] = False - dataloader_kwargs['sampler'] = DistributedSampler(dataset, **ddp_kwargs) - - if is_dataset_only_iterable and 'shuffle' in dataloader_kwargs: - with open_dict(dataloader_kwargs): - del dataloader_kwargs['shuffle'] - - dataloader = DataLoader(dataset, **dataloader_kwargs) - return dataloader - - -def make_default_val_dataset(indir, kind='default', out_size=512, transform_variant='default', **kwargs): - if OmegaConf.is_list(indir) or isinstance(indir, (tuple, list)): - return ConcatDataset([ - make_default_val_dataset(idir, kind=kind, out_size=out_size, transform_variant=transform_variant, **kwargs) for idir in indir - ]) - - LOGGER.info(f'Make val dataloader {kind} from {indir}') - mask_generator = get_mask_generator(kind=kwargs.get("mask_generator_kind"), kwargs=kwargs.get("mask_gen_kwargs")) - - if transform_variant is not None: - transform = get_transforms(transform_variant, out_size) - - if kind == 'default': - dataset = InpaintingEvaluationDataset(indir, **kwargs) - elif kind == 'our_eval': - dataset = OurInpaintingEvaluationDataset(indir, **kwargs) - elif kind == 'img_with_segm': - dataset = ImgSegmentationDataset(indir=indir, - mask_generator=mask_generator, - transform=transform, - out_size=out_size, - **kwargs) - elif kind == 'online': - dataset = InpaintingEvalOnlineDataset(indir=indir, - mask_generator=mask_generator, - transform=transform, - out_size=out_size, - **kwargs) - else: - raise ValueError(f'Unknown val dataset kind {kind}') - - return dataset - - -def make_default_val_dataloader(*args, dataloader_kwargs=None, **kwargs): - dataset = make_default_val_dataset(*args, **kwargs) - - if dataloader_kwargs is None: - dataloader_kwargs = {} - dataloader = DataLoader(dataset, **dataloader_kwargs) - return dataloader - - -def make_constant_area_crop_params(img_height, img_width, min_size=128, max_size=512, area=256*256, round_to_mod=16): - min_size = min(img_height, img_width, min_size) - max_size = min(img_height, img_width, max_size) - if random.random() < 0.5: - out_height = min(max_size, ceil_modulo(random.randint(min_size, max_size), round_to_mod)) - out_width = min(max_size, ceil_modulo(area // out_height, round_to_mod)) - else: - out_width = min(max_size, ceil_modulo(random.randint(min_size, max_size), round_to_mod)) - out_height = min(max_size, ceil_modulo(area // out_width, round_to_mod)) - - start_y = random.randint(0, img_height - out_height) - start_x = random.randint(0, img_width - out_width) - return (start_y, start_x, out_height, out_width) diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_functions_test.py b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_functions_test.py deleted file mode 100644 index 2838e543977e94c13791a681a5a6b9bb8f4110dc..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_functions_test.py +++ /dev/null @@ -1,92 +0,0 @@ -""" -这是什么? - 这个文件用于函数插件的单元测试 - 运行方法 python crazy_functions/crazy_functions_test.py -""" - -def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume) - sys.path.append(root_dir_assume) - -validate_path() # validate path so you can run from base directory - -from toolbox import get_conf, ChatBotWithCookies -proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - -llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':1.0, -} -plugin_kwargs = { } -chatbot = ChatBotWithCookies(llm_kwargs) -history = [] -system_prompt = "Serve me as a writing and programming assistant." -web_port = 1024 - - -def test_解析一个Python项目(): - from crazy_functions.解析项目源代码 import 解析一个Python项目 - txt = "crazy_functions/test_project/python/dqn" - for cookies, cb, hist, msg in 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_解析一个Cpp项目(): - from crazy_functions.解析项目源代码 import 解析一个C项目 - txt = "crazy_functions/test_project/cpp/cppipc" - for cookies, cb, hist, msg in 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_Latex英文润色(): - from crazy_functions.Latex全文润色 import Latex英文润色 - txt = "crazy_functions/test_project/latex/attention" - for cookies, cb, hist, msg in Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_Markdown中译英(): - from crazy_functions.批量Markdown翻译 import Markdown中译英 - txt = "README.md" - for cookies, cb, hist, msg in Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_批量翻译PDF文档(): - from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档 - txt = "crazy_functions/test_project/pdf_and_word" - for cookies, cb, hist, msg in 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_谷歌检索小助手(): - from crazy_functions.谷歌检索小助手 import 谷歌检索小助手 - txt = "https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=" - for cookies, cb, hist, msg in 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_总结word文档(): - from crazy_functions.总结word文档 import 总结word文档 - txt = "crazy_functions/test_project/pdf_and_word" - for cookies, cb, hist, msg in 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -def test_下载arxiv论文并翻译摘要(): - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - txt = "1812.10695" - for cookies, cb, hist, msg in 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - print(cb) - -test_解析一个Python项目() -test_Latex英文润色() -test_Markdown中译英() -test_批量翻译PDF文档() -test_谷歌检索小助手() -test_总结word文档() -test_下载arxiv论文并翻译摘要() -test_解析一个Cpp项目() - -input("程序完成,回车退出。") -print("退出。") \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/latent_codes_pool.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/latent_codes_pool.py deleted file mode 100644 index 0281d4b5e80f8eb26e824fa35b4f908dcb6634e6..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/latent_codes_pool.py +++ /dev/null @@ -1,55 +0,0 @@ -import random -import torch - - -class LatentCodesPool: - """This class implements latent codes buffer that stores previously generated w latent codes. - This buffer enables us to update discriminators using a history of generated w's - rather than the ones produced by the latest encoder. - """ - - def __init__(self, pool_size): - """Initialize the ImagePool class - Parameters: - pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created - """ - self.pool_size = pool_size - if self.pool_size > 0: # create an empty pool - self.num_ws = 0 - self.ws = [] - - def query(self, ws): - """Return w's from the pool. - Parameters: - ws: the latest generated w's from the generator - Returns w's from the buffer. - By 50/100, the buffer will return input w's. - By 50/100, the buffer will return w's previously stored in the buffer, - and insert the current w's to the buffer. - """ - if self.pool_size == 0: # if the buffer size is 0, do nothing - return ws - return_ws = [] - for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512) - # w = torch.unsqueeze(image.data, 0) - if w.ndim == 2: - i = random.randint(0, len(w) - 1) # apply a random latent index as a candidate - w = w[i] - self.handle_w(w, return_ws) - return_ws = torch.stack(return_ws, 0) # collect all the images and return - return return_ws - - def handle_w(self, w, return_ws): - if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer - self.num_ws = self.num_ws + 1 - self.ws.append(w) - return_ws.append(w) - else: - p = random.uniform(0, 1) - if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer - random_id = random.randint(0, self.pool_size - 1) # randint is inclusive - tmp = self.ws[random_id].clone() - self.ws[random_id] = w - return_ws.append(tmp) - else: # by another 50% chance, the buffer will return the current image - return_ws.append(w) diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.h b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.h deleted file mode 100644 index d0246aa06c3dcd5919111fdc914136014b9044b5..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. - -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py deleted file mode 100644 index 8bdd01fe748e411c032dbb5316c6761ecd5716e5..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py +++ /dev/null @@ -1,914 +0,0 @@ -import html -import inspect -import re -import urllib.parse as ul -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -from transformers import CLIPImageProcessor, T5EncoderModel, T5Tokenizer - -from ...loaders import LoraLoaderMixin -from ...models import UNet2DConditionModel -from ...schedulers import DDPMScheduler -from ...utils import ( - BACKENDS_MAPPING, - is_accelerate_available, - is_accelerate_version, - is_bs4_available, - is_ftfy_available, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from . import IFPipelineOutput -from .safety_checker import IFSafetyChecker -from .watermark import IFWatermarker - - -if is_bs4_available(): - from bs4 import BeautifulSoup - -if is_ftfy_available(): - import ftfy - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline - >>> from diffusers.utils import pt_to_pil - >>> import torch - - >>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) - >>> pipe.enable_model_cpu_offload() - - >>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' - >>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) - - >>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( - ... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 - ... ) - >>> super_res_1_pipe.enable_model_cpu_offload() - - >>> image = super_res_1_pipe( - ... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds - ... ).images - >>> image[0].save("./if_stage_II.png") - ``` -""" - - -class IFSuperResolutionPipeline(DiffusionPipeline, LoraLoaderMixin): - tokenizer: T5Tokenizer - text_encoder: T5EncoderModel - - unet: UNet2DConditionModel - scheduler: DDPMScheduler - image_noising_scheduler: DDPMScheduler - - feature_extractor: Optional[CLIPImageProcessor] - safety_checker: Optional[IFSafetyChecker] - - watermarker: Optional[IFWatermarker] - - bad_punct_regex = re.compile( - r"[" + "#®•©™&@·º½¾¿¡§~" + "\)" + "\(" + "\]" + "\[" + "\}" + "\{" + "\|" + "\\" + "\/" + "\*" + r"]{1,}" - ) # noqa - - _optional_components = ["tokenizer", "text_encoder", "safety_checker", "feature_extractor", "watermarker"] - - def __init__( - self, - tokenizer: T5Tokenizer, - text_encoder: T5EncoderModel, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - image_noising_scheduler: DDPMScheduler, - safety_checker: Optional[IFSafetyChecker], - feature_extractor: Optional[CLIPImageProcessor], - watermarker: Optional[IFWatermarker], - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the IF license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - if unet.config.in_channels != 6: - logger.warn( - "It seems like you have loaded a checkpoint that shall not be used for super resolution from {unet.config._name_or_path} as it accepts {unet.config.in_channels} input channels instead of 6. Please make sure to pass a super resolution checkpoint as the `'unet'`: IFSuperResolutionPipeline.from_pretrained(unet=super_resolution_unet, ...)`." - ) - - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - unet=unet, - scheduler=scheduler, - image_noising_scheduler=image_noising_scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - watermarker=watermarker, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.enable_model_cpu_offload - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - - if self.text_encoder is not None: - _, hook = cpu_offload_with_hook(self.text_encoder, device, prev_module_hook=hook) - - # Accelerate will move the next model to the device _before_ calling the offload hook of the - # previous model. This will cause both models to be present on the device at the same time. - # IF uses T5 for its text encoder which is really large. We can manually call the offload - # hook for the text encoder to ensure it's moved to the cpu before the unet is moved to - # the GPU. - self.text_encoder_offload_hook = hook - - _, hook = cpu_offload_with_hook(self.unet, device, prev_module_hook=hook) - - # if the safety checker isn't called, `unet_offload_hook` will have to be called to manually offload the unet - self.unet_offload_hook = hook - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.remove_all_hooks - def remove_all_hooks(self): - if is_accelerate_available(): - from accelerate.hooks import remove_hook_from_module - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - for model in [self.text_encoder, self.unet, self.safety_checker]: - if model is not None: - remove_hook_from_module(model, recurse=True) - - self.unet_offload_hook = None - self.text_encoder_offload_hook = None - self.final_offload_hook = None - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._text_preprocessing - def _text_preprocessing(self, text, clean_caption=False): - if clean_caption and not is_bs4_available(): - logger.warn(BACKENDS_MAPPING["bs4"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if clean_caption and not is_ftfy_available(): - logger.warn(BACKENDS_MAPPING["ftfy"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if not isinstance(text, (tuple, list)): - text = [text] - - def process(text: str): - if clean_caption: - text = self._clean_caption(text) - text = self._clean_caption(text) - else: - text = text.lower().strip() - return text - - return [process(t) for t in text] - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._clean_caption - def _clean_caption(self, caption): - caption = str(caption) - caption = ul.unquote_plus(caption) - caption = caption.strip().lower() - caption = re.sub("", "person", caption) - # urls: - caption = re.sub( - r"\b((?:https?:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - caption = re.sub( - r"\b((?:www:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - # html: - caption = BeautifulSoup(caption, features="html.parser").text - - # @ - caption = re.sub(r"@[\w\d]+\b", "", caption) - - # 31C0—31EF CJK Strokes - # 31F0—31FF Katakana Phonetic Extensions - # 3200—32FF Enclosed CJK Letters and Months - # 3300—33FF CJK Compatibility - # 3400—4DBF CJK Unified Ideographs Extension A - # 4DC0—4DFF Yijing Hexagram Symbols - # 4E00—9FFF CJK Unified Ideographs - caption = re.sub(r"[\u31c0-\u31ef]+", "", caption) - caption = re.sub(r"[\u31f0-\u31ff]+", "", caption) - caption = re.sub(r"[\u3200-\u32ff]+", "", caption) - caption = re.sub(r"[\u3300-\u33ff]+", "", caption) - caption = re.sub(r"[\u3400-\u4dbf]+", "", caption) - caption = re.sub(r"[\u4dc0-\u4dff]+", "", caption) - caption = re.sub(r"[\u4e00-\u9fff]+", "", caption) - ####################################################### - - # все виды тире / all types of dash --> "-" - caption = re.sub( - r"[\u002D\u058A\u05BE\u1400\u1806\u2010-\u2015\u2E17\u2E1A\u2E3A\u2E3B\u2E40\u301C\u3030\u30A0\uFE31\uFE32\uFE58\uFE63\uFF0D]+", # noqa - "-", - caption, - ) - - # кавычки к одному стандарту - caption = re.sub(r"[`´«»“”¨]", '"', caption) - caption = re.sub(r"[‘’]", "'", caption) - - # " - caption = re.sub(r""?", "", caption) - # & - caption = re.sub(r"&", "", caption) - - # ip adresses: - caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption) - - # article ids: - caption = re.sub(r"\d:\d\d\s+$", "", caption) - - # \n - caption = re.sub(r"\\n", " ", caption) - - # "#123" - caption = re.sub(r"#\d{1,3}\b", "", caption) - # "#12345.." - caption = re.sub(r"#\d{5,}\b", "", caption) - # "123456.." - caption = re.sub(r"\b\d{6,}\b", "", caption) - # filenames: - caption = re.sub(r"[\S]+\.(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)", "", caption) - - # - caption = re.sub(r"[\"\']{2,}", r'"', caption) # """AUSVERKAUFT""" - caption = re.sub(r"[\.]{2,}", r" ", caption) # """AUSVERKAUFT""" - - caption = re.sub(self.bad_punct_regex, r" ", caption) # ***AUSVERKAUFT***, #AUSVERKAUFT - caption = re.sub(r"\s+\.\s+", r" ", caption) # " . " - - # this-is-my-cute-cat / this_is_my_cute_cat - regex2 = re.compile(r"(?:\-|\_)") - if len(re.findall(regex2, caption)) > 3: - caption = re.sub(regex2, " ", caption) - - caption = ftfy.fix_text(caption) - caption = html.unescape(html.unescape(caption)) - - caption = re.sub(r"\b[a-zA-Z]{1,3}\d{3,15}\b", "", caption) # jc6640 - caption = re.sub(r"\b[a-zA-Z]+\d+[a-zA-Z]+\b", "", caption) # jc6640vc - caption = re.sub(r"\b\d+[a-zA-Z]+\d+\b", "", caption) # 6640vc231 - - caption = re.sub(r"(worldwide\s+)?(free\s+)?shipping", "", caption) - caption = re.sub(r"(free\s)?download(\sfree)?", "", caption) - caption = re.sub(r"\bclick\b\s(?:for|on)\s\w+", "", caption) - caption = re.sub(r"\b(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)(\simage[s]?)?", "", caption) - caption = re.sub(r"\bpage\s+\d+\b", "", caption) - - caption = re.sub(r"\b\d*[a-zA-Z]+\d+[a-zA-Z]+\d+[a-zA-Z\d]*\b", r" ", caption) # j2d1a2a... - - caption = re.sub(r"\b\d+\.?\d*[xх×]\d+\.?\d*\b", "", caption) - - caption = re.sub(r"\b\s+\:\s+", r": ", caption) - caption = re.sub(r"(\D[,\./])\b", r"\1 ", caption) - caption = re.sub(r"\s+", " ", caption) - - caption.strip() - - caption = re.sub(r"^[\"\']([\w\W]+)[\"\']$", r"\1", caption) - caption = re.sub(r"^[\'\_,\-\:;]", r"", caption) - caption = re.sub(r"[\'\_,\-\:\-\+]$", r"", caption) - caption = re.sub(r"^\.\S+$", "", caption) - - return caption.strip() - - @torch.no_grad() - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.encode_prompt - def encode_prompt( - self, - prompt, - do_classifier_free_guidance=True, - num_images_per_prompt=1, - device=None, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - clean_caption: bool = False, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`, *optional*): - torch device to place the resulting embeddings on - num_images_per_prompt (`int`, *optional*, defaults to 1): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and negative_prompt is not None: - if type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - - if device is None: - device = self._execution_device - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # while T5 can handle much longer input sequences than 77, the text encoder was trained with a max length of 77 for IF - max_length = 77 - - if prompt_embeds is None: - prompt = self._text_preprocessing(prompt, clean_caption=clean_caption) - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=max_length, - truncation=True, - add_special_tokens=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {max_length} tokens: {removed_text}" - ) - - attention_mask = text_inputs.attention_mask.to(device) - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - dtype = self.text_encoder.dtype - elif self.unet is not None: - dtype = self.unet.dtype - else: - dtype = None - - prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - uncond_tokens = self._text_preprocessing(uncond_tokens, clean_caption=clean_caption) - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - attention_mask = uncond_input.attention_mask.to(device) - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - else: - negative_prompt_embeds = None - - return prompt_embeds, negative_prompt_embeds - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, nsfw_detected, watermark_detected = self.safety_checker( - images=image, - clip_input=safety_checker_input.pixel_values.to(dtype=dtype), - ) - else: - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - - return image, nsfw_detected, watermark_detected - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - image, - batch_size, - noise_level, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - if noise_level < 0 or noise_level >= self.image_noising_scheduler.config.num_train_timesteps: - raise ValueError( - f"`noise_level`: {noise_level} must be a valid timestep in `self.noising_scheduler`, [0, {self.image_noising_scheduler.config.num_train_timesteps})" - ) - - if isinstance(image, list): - check_image_type = image[0] - else: - check_image_type = image - - if ( - not isinstance(check_image_type, torch.Tensor) - and not isinstance(check_image_type, PIL.Image.Image) - and not isinstance(check_image_type, np.ndarray) - ): - raise ValueError( - "`image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] but is" - f" {type(check_image_type)}" - ) - - if isinstance(image, list): - image_batch_size = len(image) - elif isinstance(image, torch.Tensor): - image_batch_size = image.shape[0] - elif isinstance(image, PIL.Image.Image): - image_batch_size = 1 - elif isinstance(image, np.ndarray): - image_batch_size = image.shape[0] - else: - assert False - - if batch_size != image_batch_size: - raise ValueError(f"image batch size: {image_batch_size} must be same as prompt batch size {batch_size}") - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.prepare_intermediate_images - def prepare_intermediate_images(self, batch_size, num_channels, height, width, dtype, device, generator): - shape = (batch_size, num_channels, height, width) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - intermediate_images = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # scale the initial noise by the standard deviation required by the scheduler - intermediate_images = intermediate_images * self.scheduler.init_noise_sigma - return intermediate_images - - def preprocess_image(self, image, num_images_per_prompt, device): - if not isinstance(image, torch.Tensor) and not isinstance(image, list): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - image = [np.array(i).astype(np.float32) / 127.5 - 1.0 for i in image] - - image = np.stack(image, axis=0) # to np - image = torch.from_numpy(image.transpose(0, 3, 1, 2)) - elif isinstance(image[0], np.ndarray): - image = np.stack(image, axis=0) # to np - if image.ndim == 5: - image = image[0] - - image = torch.from_numpy(image.transpose(0, 3, 1, 2)) - elif isinstance(image, list) and isinstance(image[0], torch.Tensor): - dims = image[0].ndim - - if dims == 3: - image = torch.stack(image, dim=0) - elif dims == 4: - image = torch.concat(image, dim=0) - else: - raise ValueError(f"Image must have 3 or 4 dimensions, instead got {dims}") - - image = image.to(device=device, dtype=self.unet.dtype) - - image = image.repeat_interleave(num_images_per_prompt, dim=0) - - return image - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: int = None, - width: int = None, - image: Union[PIL.Image.Image, np.ndarray, torch.FloatTensor] = None, - num_inference_steps: int = 50, - timesteps: List[int] = None, - guidance_scale: float = 4.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - noise_level: int = 250, - clean_caption: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - height (`int`, *optional*, defaults to self.unet.config.sample_size): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size): - The width in pixels of the generated image. - image (`PIL.Image.Image`, `np.ndarray`, `torch.FloatTensor`): - The image to be upscaled. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - timesteps (`List[int]`, *optional*): - Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps` - timesteps are used. Must be in descending order. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - noise_level (`int`, *optional*, defaults to 250): - The amount of noise to add to the upscaled image. Must be in the range `[0, 1000)` - clean_caption (`bool`, *optional*, defaults to `True`): - Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to - be installed. If the dependencies are not installed, the embeddings will be created from the raw - prompt. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.IFPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.IFPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When - returning a tuple, the first element is a list with the generated images, and the second element is a list - of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) - or watermarked content, according to the `safety_checker`. - """ - # 1. Check inputs. Raise error if not correct - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - self.check_inputs( - prompt, - image, - batch_size, - noise_level, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - - height = height or self.unet.config.sample_size - width = width or self.unet.config.sample_size - - device = self._execution_device - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds, negative_prompt_embeds = self.encode_prompt( - prompt, - do_classifier_free_guidance, - num_images_per_prompt=num_images_per_prompt, - device=device, - negative_prompt=negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - clean_caption=clean_caption, - ) - - if do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - # 4. Prepare timesteps - if timesteps is not None: - self.scheduler.set_timesteps(timesteps=timesteps, device=device) - timesteps = self.scheduler.timesteps - num_inference_steps = len(timesteps) - else: - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare intermediate images - num_channels = self.unet.config.in_channels // 2 - intermediate_images = self.prepare_intermediate_images( - batch_size * num_images_per_prompt, - num_channels, - height, - width, - prompt_embeds.dtype, - device, - generator, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Prepare upscaled image and noise level - image = self.preprocess_image(image, num_images_per_prompt, device) - upscaled = F.interpolate(image, (height, width), mode="bilinear", align_corners=True) - - noise_level = torch.tensor([noise_level] * upscaled.shape[0], device=upscaled.device) - noise = randn_tensor(upscaled.shape, generator=generator, device=upscaled.device, dtype=upscaled.dtype) - upscaled = self.image_noising_scheduler.add_noise(upscaled, noise, timesteps=noise_level) - - if do_classifier_free_guidance: - noise_level = torch.cat([noise_level] * 2) - - # HACK: see comment in `enable_model_cpu_offload` - if hasattr(self, "text_encoder_offload_hook") and self.text_encoder_offload_hook is not None: - self.text_encoder_offload_hook.offload() - - # 8. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - model_input = torch.cat([intermediate_images, upscaled], dim=1) - - model_input = torch.cat([model_input] * 2) if do_classifier_free_guidance else model_input - model_input = self.scheduler.scale_model_input(model_input, t) - - # predict the noise residual - noise_pred = self.unet( - model_input, - t, - encoder_hidden_states=prompt_embeds, - class_labels=noise_level, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(model_input.shape[1] // 2, dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(model_input.shape[1] // 2, dim=1) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], dim=1) - - if self.scheduler.config.variance_type not in ["learned", "learned_range"]: - noise_pred, _ = noise_pred.split(intermediate_images.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - intermediate_images = self.scheduler.step( - noise_pred, t, intermediate_images, **extra_step_kwargs, return_dict=False - )[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, intermediate_images) - - image = intermediate_images - - if output_type == "pil": - # 9. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 10. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 11. Convert to PIL - image = self.numpy_to_pil(image) - - # 12. Apply watermark - if self.watermarker is not None: - self.watermarker.apply_watermark(image, self.unet.config.sample_size) - elif output_type == "pt": - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - else: - # 9. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 10. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, nsfw_detected, watermark_detected) - - return IFPipelineOutput(images=image, nsfw_detected=nsfw_detected, watermark_detected=watermark_detected) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sabl/README.md b/spaces/Andy1621/uniformer_image_detection/configs/sabl/README.md deleted file mode 100644 index 34b8367d3665358a7cd3a9cc906b80b8a2cb3fe1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/sabl/README.md +++ /dev/null @@ -1,37 +0,0 @@ -# Side-Aware Boundary Localization for More Precise Object Detection - -## Introduction - -[ALGORITHM] - -We provide config files to reproduce the object detection results in the ECCV 2020 Spotlight paper for [Side-Aware Boundary Localization for More Precise Object Detection](https://arxiv.org/abs/1912.04260). - -```latex -@inproceedings{Wang_2020_ECCV, - title = {Side-Aware Boundary Localization for More Precise Object Detection}, - author = {Jiaqi Wang and Wenwei Zhang and Yuhang Cao and Kai Chen and Jiangmiao Pang and Tao Gong and Jianping Shi and Chen Change Loy and Dahua Lin}, - booktitle = {ECCV}, - year = {2020} -} -``` - -## Results and Models - -The results on COCO 2017 val is shown in the below table. (results on test-dev are usually slightly higher than val). -Single-scale testing (1333x800) is adopted in all results. - -| Method | Backbone | Lr schd | ms-train | box AP | Config | Download | -| :----------------: | :-------: | :-----: | :------: | :----: | :----------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| SABL Faster R-CNN | R-50-FPN | 1x | N | 39.9 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_faster_rcnn_r50_fpn_1x_coco/sabl_faster_rcnn_r50_fpn_1x_coco-e867595b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_faster_rcnn_r50_fpn_1x_coco/20200830_130324.log.json) | -| SABL Faster R-CNN | R-101-FPN | 1x | N | 41.7 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_faster_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_faster_rcnn_r101_fpn_1x_coco/sabl_faster_rcnn_r101_fpn_1x_coco-f804c6c1.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_faster_rcnn_r101_fpn_1x_coco/20200830_183949.log.json) | -| SABL Cascade R-CNN | R-50-FPN | 1x | N | 41.6 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco/sabl_cascade_rcnn_r50_fpn_1x_coco-e1748e5e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco/20200831_033726.log.json) | -| SABL Cascade R-CNN | R-101-FPN | 1x | N | 43.0 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_cascade_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_cascade_rcnn_r101_fpn_1x_coco/sabl_cascade_rcnn_r101_fpn_1x_coco-2b83e87c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_cascade_rcnn_r101_fpn_1x_coco/20200831_141745.log.json) | - -| Method | Backbone | GN | Lr schd | ms-train | box AP | Config | Download | -| :------------: | :-------: | :---: | :-----: | :---------: | :----: | :---------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| SABL RetinaNet | R-50-FPN | N | 1x | N | 37.7 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_retinanet_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r50_fpn_1x_coco/sabl_retinanet_r50_fpn_1x_coco-6c54fd4f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r50_fpn_1x_coco/20200830_053451.log.json) | -| SABL RetinaNet | R-50-FPN | Y | 1x | N | 38.8 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_retinanet_r50_fpn_gn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r50_fpn_gn_1x_coco/sabl_retinanet_r50_fpn_gn_1x_coco-e16dfcf1.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r50_fpn_gn_1x_coco/20200831_141955.log.json) | -| SABL RetinaNet | R-101-FPN | N | 1x | N | 39.7 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_retinanet_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_1x_coco/sabl_retinanet_r101_fpn_1x_coco-42026904.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_1x_coco/20200831_034256.log.json) | -| SABL RetinaNet | R-101-FPN | Y | 1x | N | 40.5 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_retinanet_r101_fpn_gn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_gn_1x_coco/sabl_retinanet_r101_fpn_gn_1x_coco-40a893e8.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_gn_1x_coco/20200830_201422.log.json) | -| SABL RetinaNet | R-101-FPN | Y | 2x | Y (640~800) | 42.9 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco-1e63382c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco/20200830_144807.log.json) | -| SABL RetinaNet | R-101-FPN | Y | 2x | Y (480~960) | 43.6 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco-5342f857.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco/20200830_164537.log.json) | diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_20k.py deleted file mode 100644 index bf780a1b6f6521833c6a5859675147824efa599d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_20k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=20000) -checkpoint_config = dict(by_epoch=False, interval=2000) -evaluation = dict(interval=2000, metric='mIoU') diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/collect_env.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/collect_env.py deleted file mode 100644 index 65c2134ddbee9655161237dd0894d38c768c2624..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/collect_env.py +++ /dev/null @@ -1,17 +0,0 @@ -from annotator.uniformer.mmcv.utils import collect_env as collect_base_env -from annotator.uniformer.mmcv.utils import get_git_hash - -import annotator.uniformer.mmseg as mmseg - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMSegmentation'] = f'{mmseg.__version__}+{get_git_hash()[:7]}' - - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print('{}: {}'.format(name, val)) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/transforms.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". - """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. - """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/Ashrafb/codellama-34b/style.css b/spaces/Ashrafb/codellama-34b/style.css deleted file mode 100644 index 303c3d7ef3b06c42b211797cd2d5af9800589092..0000000000000000000000000000000000000000 --- a/spaces/Ashrafb/codellama-34b/style.css +++ /dev/null @@ -1,16 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: white; - background: #1565c0; - border-radius: 100vh; -} - -#component-0 { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} diff --git a/spaces/Audio-AGI/WavJourney/VoiceParser/model.py b/spaces/Audio-AGI/WavJourney/VoiceParser/model.py deleted file mode 100644 index 1fd265241d43953a04578a43fe0248dd2233348a..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/WavJourney/VoiceParser/model.py +++ /dev/null @@ -1,102 +0,0 @@ -import os -import json -import numpy as np - -import torch -import torchaudio -torchaudio.set_audio_backend("soundfile") # Use 'soundfile' backend - -from encodec import EncodecModel -from encodec.utils import convert_audio -from .hubert_manager import HuBERTManager -from .pre_kmeans_hubert import CustomHubert -from .customtokenizer import CustomTokenizer - -class VoiceParser(): - def __init__(self, device='cpu'): - model = ('quantifier_hubert_base_ls960_14.pth', 'tokenizer.pth') - - hubert_model = CustomHubert(HuBERTManager.make_sure_hubert_installed(), device=device) - quant_model = CustomTokenizer.load_from_checkpoint(HuBERTManager.make_sure_tokenizer_installed(model=model[0], local_file=model[1]), device) - encodec_model = EncodecModel.encodec_model_24khz() - encodec_model.set_target_bandwidth(6.0) - - self.hubert_model = hubert_model - self.quant_model = quant_model - self.encodec_model = encodec_model.to(device) - self.device = device - print('Loaded VoiceParser models!') - - - def extract_acoustic_embed(self, wav_path, npz_dir): - wav, sr = torchaudio.load(wav_path) - - wav_hubert = wav.to(self.device) - - if wav_hubert.shape[0] == 2: # Stereo to mono if needed - wav_hubert = wav_hubert.mean(0, keepdim=True) - - semantic_vectors = self.hubert_model.forward(wav_hubert, input_sample_hz=sr) - semantic_tokens = self.quant_model.get_token(semantic_vectors) - wav = convert_audio(wav, sr, self.encodec_model.sample_rate, 1).unsqueeze(0) - - wav = wav.to(self.device) - - with torch.no_grad(): - encoded_frames = self.encodec_model.encode(wav) - - codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() - - codes = codes.cpu() - semantic_tokens = semantic_tokens.cpu() - - wav_name = os.path.split(wav_path)[1] - npz_name = wav_name[:-4] + '.npz' - npz_path = os.path.join(npz_dir, npz_name) - - np.savez( - npz_path, - semantic_prompt=semantic_tokens, - fine_prompt=codes, - coarse_prompt=codes[:2, :] - ) - - return npz_path - - - def read_json_file(self, json_path): - with open(json_path, 'r') as file: - data = json.load(file) - return data - - - def parse_voice_json(self, voice_json, output_dir): - """ - Parse a voice json file, generate the corresponding output json and npz files - Params: - voice_json: path of a json file or List of json nodes - output_dir: output dir for new json and npz files - """ - if isinstance(voice_json, list): - voice_json = voice_json - else: - # If voice_json is a file path (str), read the JSON file - voice_json = self.read_json_file(voice_json) - for item in voice_json: - wav_path = item['wav'] - npz_path = self.extract_acoustic_embed(wav_path=wav_path, npz_dir=output_dir) - item['npz'] = npz_path - del item['wav'] - - output_json = os.path.join(output_dir, 'metadata.json') - - with open(output_json, 'w') as file: - json.dump(voice_json, file, indent=4) - - - - - - - - diff --git a/spaces/Avkash/WhisperUI/whisperui.py b/spaces/Avkash/WhisperUI/whisperui.py deleted file mode 100644 index 30c70f64571ae8d91c4a021af7818ddea05ff83a..0000000000000000000000000000000000000000 --- a/spaces/Avkash/WhisperUI/whisperui.py +++ /dev/null @@ -1,216 +0,0 @@ -import whisper -import gradio as gr -import os -from pytube import YouTube - - -class WhisperModelUI(object): - def __init__(self, ui_obj): - self.name = "Whisper Model Processor UI" - self.description = "This class is designed to build UI for our Whisper Model" - self.ui_obj = ui_obj - self.audio_files_list = ['No content'] - self.whisper_model = whisper.model.Whisper - self.video_store_path = 'data_files' - - def load_content(self, file_list): - video_out_path = os.path.join(os.getcwd(), self.video_store_path) - - self.audio_files_list = [f for f in os.listdir(video_out_path) - if os.path.isfile(video_out_path + "/" + f) - and (f.endswith(".mp4") or f.endswith('mp3'))] - - return gr.Dropdown.update(choices=self.audio_files_list) - - def load_whisper_model(self, model_type): - try: - asr_model = whisper.load_model(model_type.lower()) - self.whisper_model = asr_model - status = "{} Model is loaded successfully".format(model_type) - except: - status = "error in loading {} model".format(model_type) - - return status, str(self.whisper_model) - - def load_youtube_video(self, video_url): - video_out_path = os.path.join(os.getcwd(), self.video_store_path) - yt = YouTube(video_url) - local_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by( - 'resolution').desc().first().download(video_out_path) - return local_video_path - - def get_video_to_text(self, - transcribe_or_decode, - video_list_dropdown_file_name, - language_detect, - translate_or_transcribe - ): - debug_text = "" - try: - video_out_path = os.path.join(os.getcwd(), 'data_files') - video_full_path = os.path.join(video_out_path, video_list_dropdown_file_name) - if not os.path.isfile(video_full_path): - video_text = "Selected video/audio is could not be located.." - else: - video_text = "Bad choice or result.." - if transcribe_or_decode == 'Transcribe': - video_text, debug_text = self.run_asr_with_transcribe(video_full_path, language_detect, - translate_or_transcribe) - elif transcribe_or_decode == 'Decode': - audio = whisper.load_audio(video_full_path) - video_text, debug_text = self.run_asr_with_decode(audio, language_detect, - translate_or_transcribe) - except: - video_text = "Error processing audio..." - return video_text, debug_text - - def run_asr_with_decode(self, audio, language_detect, translate_or_transcribe): - debug_info = "None.." - - if 'encoder' not in dir(self.whisper_model) or 'decoder' not in dir(self.whisper_model): - return "Model is not loaded, please load the model first", debug_info - - if self.whisper_model.encoder is None or self.whisper_model.decoder is None: - return "Model is not loaded, please load the model first", debug_info - - try: - # pad/trim it to fit 30 seconds - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(self.whisper_model.device) - - if language_detect == 'Detect': - # detect the spoken language - _, probs = self.whisper_model.detect_language(mel) - # print(f"Detected language: {max(probs, key=probs.get)}") - - # decode the audio - # mps crash if fp16=False is not used - - task_type = 'transcribe' - if translate_or_transcribe == 'Translate': - task_type = 'translate' - - if language_detect != 'Detect': - options = whisper.DecodingOptions(fp16=False, - language=language_detect, - task=task_type) - else: - options = whisper.DecodingOptions(fp16=False, - task=task_type) - - result = whisper.decode(self.whisper_model, mel, options) - result_text = result.text - debug_info = str(result) - except: - result_text = "Error handing audio to text.." - return result_text, debug_info - - def run_asr_with_transcribe(self, audio_path, language_detect, translate_or_transcribe): - result_text = "Error..." - debug_info = "None.." - - if 'encoder' not in dir(self.whisper_model) or 'decoder' not in dir(self.whisper_model): - return "Model is not loaded, please load the model first", debug_info - - if self.whisper_model.encoder is None or self.whisper_model.decoder is None: - return "Model is not loaded, please load the model first", debug_info - - task_type = 'transcribe' - if translate_or_transcribe == 'Translate': - task_type = 'translate' - - transcribe_options = dict(beam_size=5, best_of=5, - fp16=False, - task=task_type, - without_timestamps=False) - if language_detect != 'Detect': - transcribe_options['language'] = language_detect - - transcription = self.whisper_model.transcribe(audio_path, **transcribe_options) - if transcription is not None: - result_text = transcription['text'] - debug_info = str(transcription) - return result_text, debug_info - - def create_whisper_ui(self): - with self.ui_obj: - gr.Markdown("Whisper ASR Model UI") - with gr.Tabs(): - with gr.TabItem("YouTube to Text"): - with gr.Row(): - with gr.Column(): - asr_model_type = gr.Radio(['Tiny', 'Base', 'Small', 'Medium', 'Large'], - label="Whisper Model Type", - value='Base' - ) - model_status_lbl = gr.Label(label="Model Load Status...") - load_model_btn = gr.Button("Load Whisper Model") - youtube_url = gr.Textbox(label="YouTube URL", - # value="https://www.youtube.com/watch?v=Y2nHd7El8iw" - value="https://www.youtube.com/watch?v=PpH_mi923_A" - ) - youtube_video = gr.Video(label="YouTube Video") - get_video_btn = gr.Button("Load YouTube URL") - with gr.Column(): - video_list_dropdown = gr.Dropdown(self.audio_files_list, label="Saved Videos") - load_video_list_btn = gr.Button("Load All Videos") - transcribe_or_decode = gr.Radio(['Transcribe', 'Decode'], - label="ASR Options", - value='Transcribe' - ) - language_detect = gr.Dropdown(['Detect', 'English', 'Hindi', 'Japanese'], - label="Provide Language or detect") - translate_or_transcribe = gr.Dropdown(['Transcribe', 'Translate'], - label="Set your output task - Translate or Transcribe") - get_video_txt_btn = gr.Button("Convert Video to Text") - video_text = gr.Textbox(label="Video to Text", lines=10) - with gr.TabItem("Debug Info"): - with gr.Row(): - with gr.Column(): - debug_text = gr.Textbox(label="Debug Details", lines=20) - load_model_btn.click( - self.load_whisper_model, - [ - asr_model_type - ], - [ - model_status_lbl, - debug_text - ] - ) - get_video_btn.click( - self.load_youtube_video, - [ - youtube_url - ], - [ - youtube_video - ] - ) - load_video_list_btn.click( - self.load_content, - [ - video_list_dropdown - ], - [ - video_list_dropdown - ] - ) - get_video_txt_btn.click( - self.get_video_to_text, - [ - transcribe_or_decode, - video_list_dropdown, - language_detect, - translate_or_transcribe - ], - [ - video_text, - debug_text - ] - ) - - def launch_ui(self): - self.ui_obj.launch(debug=True) diff --git a/spaces/Banbri/zcvzcv/src/lib/getInitialRenderedScene.ts b/spaces/Banbri/zcvzcv/src/lib/getInitialRenderedScene.ts deleted file mode 100644 index 7c0739bf8bebdaf16aa4acf610eb6bdad9c15fd2..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/lib/getInitialRenderedScene.ts +++ /dev/null @@ -1,11 +0,0 @@ -import { RenderedScene } from "@/types" - -export const getInitialRenderedScene = (): RenderedScene => ({ - renderId: "", - status: "pending", - assetUrl: "", - alt: "", - error: "", - maskUrl: "", - segments: [] -}) \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/julius/lowpass.py b/spaces/Bart92/RVC_HF/julius/lowpass.py deleted file mode 100644 index 0eb46e382b20bfc2d93482f9f027986b863de6f0..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/julius/lowpass.py +++ /dev/null @@ -1,181 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -FIR windowed sinc lowpass filters. -""" - -import math -from typing import Sequence, Optional - -import torch -from torch.nn import functional as F - -from .core import sinc -from .fftconv import fft_conv1d -from .utils import simple_repr - - -class LowPassFilters(torch.nn.Module): - """ - Bank of low pass filters. Note that a high pass or band pass filter can easily - be implemented by substracting a same signal processed with low pass filters with different - frequencies (see `julius.bands.SplitBands` for instance). - This uses a windowed sinc filter, very similar to the one used in - `julius.resample`. However, because we do not change the sample rate here, - this filter can be much more efficiently implemented using the FFT convolution from - `julius.fftconv`. - - Args: - cutoffs (list[float]): list of cutoff frequencies, in [0, 0.5] expressed as `f/f_s` where - f_s is the samplerate and `f` is the cutoff frequency. - The upper limit is 0.5, because a signal sampled at `f_s` contains only - frequencies under `f_s / 2`. - stride (int): how much to decimate the output. Keep in mind that decimation - of the output is only acceptable if the cutoff frequency is under `1/ (2 * stride)` - of the original sampling rate. - pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`, - the output will have the same length as the input. - zeros (float): Number of zero crossings to keep. - Controls the receptive field of the Finite Impulse Response filter. - For lowpass filters with low cutoff frequency, e.g. 40Hz at 44.1kHz, - it is a bad idea to set this to a high value. - This is likely appropriate for most use. Lower values - will result in a faster filter, but with a slower attenuation around the - cutoff frequency. - fft (bool or None): if True, uses `julius.fftconv` rather than PyTorch convolutions. - If False, uses PyTorch convolutions. If None, either one will be chosen automatically - depending on the effective filter size. - - - ..warning:: - All the filters will use the same filter size, aligned on the lowest - frequency provided. If you combine a lot of filters with very diverse frequencies, it might - be more efficient to split them over multiple modules with similar frequencies. - - ..note:: - A lowpass with a cutoff frequency of 0 is defined as the null function - by convention here. This allows for a highpass with a cutoff of 0 to - be equal to identity, as defined in `julius.filters.HighPassFilters`. - - Shape: - - - Input: `[*, T]` - - Output: `[F, *, T']`, with `T'=T` if `pad` is True and `stride` is 1, and - `F` is the numer of cutoff frequencies. - - >>> lowpass = LowPassFilters([1/4]) - >>> x = torch.randn(4, 12, 21, 1024) - >>> list(lowpass(x).shape) - [1, 4, 12, 21, 1024] - """ - - def __init__(self, cutoffs: Sequence[float], stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - self.cutoffs = list(cutoffs) - if min(self.cutoffs) < 0: - raise ValueError("Minimum cutoff must be larger than zero.") - if max(self.cutoffs) > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.stride = stride - self.pad = pad - self.zeros = zeros - self.half_size = int(zeros / min([c for c in self.cutoffs if c > 0]) / 2) - if fft is None: - fft = self.half_size > 32 - self.fft = fft - window = torch.hann_window(2 * self.half_size + 1, periodic=False) - time = torch.arange(-self.half_size, self.half_size + 1) - filters = [] - for cutoff in cutoffs: - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * math.pi * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filters.append(filter_) - self.register_buffer("filters", torch.stack(filters)[:, None]) - - def forward(self, input): - shape = list(input.shape) - input = input.view(-1, 1, shape[-1]) - if self.pad: - input = F.pad(input, (self.half_size, self.half_size), mode='replicate') - if self.fft: - out = fft_conv1d(input, self.filters, stride=self.stride) - else: - out = F.conv1d(input, self.filters, stride=self.stride) - shape.insert(0, len(self.cutoffs)) - shape[-1] = out.shape[-1] - return out.permute(1, 0, 2).reshape(shape) - - def __repr__(self): - return simple_repr(self) - - -class LowPassFilter(torch.nn.Module): - """ - Same as `LowPassFilters` but applies a single low pass filter. - - Shape: - - - Input: `[*, T]` - - Output: `[*, T']`, with `T'=T` if `pad` is True and `stride` is 1. - - >>> lowpass = LowPassFilter(1/4, stride=2) - >>> x = torch.randn(4, 124) - >>> list(lowpass(x).shape) - [4, 62] - """ - - def __init__(self, cutoff: float, stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - self._lowpasses = LowPassFilters([cutoff], stride, pad, zeros, fft) - - @property - def cutoff(self): - return self._lowpasses.cutoffs[0] - - @property - def stride(self): - return self._lowpasses.stride - - @property - def pad(self): - return self._lowpasses.pad - - @property - def zeros(self): - return self._lowpasses.zeros - - @property - def fft(self): - return self._lowpasses.fft - - def forward(self, input): - return self._lowpasses(input)[0] - - def __repr__(self): - return simple_repr(self) - - -def lowpass_filters(input: torch.Tensor, cutoffs: Sequence[float], - stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `LowPassFilters`, refer to this class for more information. - """ - return LowPassFilters(cutoffs, stride, pad, zeros, fft).to(input)(input) - - -def lowpass_filter(input: torch.Tensor, cutoff: float, - stride: int = 1, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Same as `lowpass_filters` but with a single cutoff frequency. - Output will not have a dimension inserted in the front. - """ - return lowpass_filters(input, [cutoff], stride, pad, zeros, fft)[0] diff --git a/spaces/Benson/text-generation/Examples/Crimen De Gngster Real Versin Antigua Apkpure.md b/spaces/Benson/text-generation/Examples/Crimen De Gngster Real Versin Antigua Apkpure.md deleted file mode 100644 index ba125d4aa871cec79c10c350b0ecad47713eb2aa..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Crimen De Gngster Real Versin Antigua Apkpure.md +++ /dev/null @@ -1,139 +0,0 @@ - -

Real Gangster Crime Versión antigua APKPure: Una revisión

-

Si eres un fan de los juegos de acción y aventura de mundo abierto, es posible que hayas oído hablar de Real Gangster Crime. Este es un juego sandbox que te permite explorar una ciudad llena de crimen, violencia y caos. Puedes conducir autos, disparar armas, luchar contra enemigos y completar misiones a medida que te conviertes en el gángster más notorio de la ciudad. ¿Pero sabías que también puedes descargar y jugar la versión antigua de Real Gangster Crime usando APKPure? En este artículo, vamos a revisar lo que es Real Gangster Crime, por qué es posible que desee descargar su versión anterior APKPure, y cómo hacerlo.

-

¿Qué es el verdadero crimen de gángsters?

-

Real Gangster Crime es un juego desarrollado por Naxeex Studio y lanzado en 2016. Está disponible para dispositivos Android y se puede descargar desde Google Play Store u otras fuentes. El juego está clasificado 4.1 de 5 estrellas por más de 1 millón de usuarios en Google Play Store.

-

crimen de gángster real versión antigua apkpure


Download Filehttps://bltlly.com/2v6ITx



-

Un juego de sandbox con acción de mundo abierto y aventura

-

Real Gangster Crime es un juego sandbox, lo que significa que puedes deambular libremente por el mundo del juego e interactuar con varios elementos. El mundo del juego es una ciudad ficticia llamada New Vegas, que se inspira en Las Vegas. La ciudad está llena de rascacielos, casinos, hoteles, clubes y otras atracciones. Sin embargo, también está plagada de crimen, corrupción, pandillas y policía. Puedes elegir seguir el modo historia o crear tus propias aventuras en la ciudad.

-

Las principales características y jugabilidad de Real Gangster Crime

-

Las principales características y jugabilidad de Real Gangster Crime incluyen:

-
    -
  • Conducir coches: Usted puede conducir varios vehículos en el juego, tales como coches deportivos, motocicletas, camiones, tanques, helicópteros, e incluso ovnis. También puede personalizar sus vehículos con diferentes colores, pegatinas, armas y mejoras.
  • - -
  • Lucha contra los enemigos: Puedes luchar contra diferentes enemigos en el juego, como gángsters rivales, oficiales de policía, soldados, zombis, alienígenas y robots. También puedes unirte o crear tu propia pandilla y reclutar miembros para ayudarte en tus misiones.
  • -
  • Completar misiones: Puedes completar varias misiones en el juego, como robar bancos, robar coches, asesinar objetivos, escapar de la prisión, destruir edificios y más. También puedes ganar dinero y reputación completando misiones.
  • -
-

¿Por qué descargar Real Gangster Crime versión antigua APKPure?

-

Si te estás preguntando por qué es posible que desee descargar Real Gangster Crime versión antigua APKPure en lugar de la última versión de Google Play Store u otras fuentes, aquí hay algunas razones posibles:

-

Los beneficios de usar APKPure para descargar versiones antiguas de aplicaciones

-

APKPure es un sitio web que proporciona descargas gratuitas y seguras de aplicaciones Android en formato APK. APK significa Android Package Kit, que es un formato de archivo que contiene todos los componentes de una aplicación. Al descargar aplicaciones en formato APK desde APKPure, puedes disfrutar de algunos beneficios como:

-
    -
  • Eludir las restricciones regionales: Algunas aplicaciones pueden no estar disponibles en su país o región debido a varias razones. Al descargar aplicaciones de APKPure, puede acceder a

    las aplicaciones que no están disponibles en su región o acceder a las funciones que están restringidas en su área.

  • -
  • Obtener versiones anteriores de aplicaciones: Algunas aplicaciones pueden actualizar sus características o diseño con el tiempo, que puede no adaptarse a sus preferencias o la compatibilidad del dispositivo. Al descargar aplicaciones de APKPure, puedes elegir la versión que más te guste o que mejor funcione para tu dispositivo.
  • -
  • Ahorro de espacio de almacenamiento: Algunas aplicaciones pueden aumentar su tamaño o requerir datos adicionales después de la actualización, lo que puede ocupar más espacio de almacenamiento en su dispositivo. Al descargar aplicaciones de APKPure, puede obtener la versión que tiene un tamaño más pequeño o menos datos.
  • -
- -

La versión antigua de Real Gangster Crime que se puede descargar de APKPure es la versión 4.6b, que fue lanzado el 19 de marzo de 2020. La última versión de Real Gangster Crime que se puede descargar desde Google Play Store u otras fuentes es la versión 5.5, que fue lanzado el 12 de mayo de 2021. Las diferencias entre las dos versiones incluyen:

- - -Versión antigua -Nueva versión - - -Ninguna característica de cabeza de dragón -función de cabeza de dragón añadido - - -Ninguna función de campo de batalla -Función de campo de batalla añadida - - -Ninguna función de OVNI -función OVNI añadido - - -Ninguna función de helicóptero -función de helicóptero añadido - - -Sin función lanzallamas -Función lanzallamas agregada - - -No hay función de rifle láser -Función de rifle láser agregada - - -Sin función de traje de acero -función de traje de acero añadido - - -Ninguna característica de carreras de coches -Característica de carreras de coches añadido - - -No hay función de hackeo de cajeros automáticos -función de hackeo de cajeros automáticos agregada - - -No hay función de huevos de Pascua -Se agregó la función de huevos de Pascua - - Fuente: - -

Es posible que prefieras la versión antigua de Real Gangster Crime si te gusta el juego más simple y clásico, o si tienes un dispositivo de gama baja que no puede manejar las nuevas características. Es posible que prefieras la nueva versión de Real Gangster Crime si te gusta la jugabilidad más diversa y moderna, o si tienes un dispositivo de alta gama que puede admitir las nuevas características.

-

¿Cómo descargar e instalar la versión antigua de Real Gangster Crime APKPure?

-

Si desea descargar e instalar la versión antigua de Real Gangster Crime APKPure, debe seguir estos pasos:

-

Los pasos para descargar e instalar Real Gangster Crime versión antigua APKPure

-
    -
  1. Ir al sitio web de APKPure y buscar Real Gangster Crime.
  2. - -
  3. Haga clic en el botón "Descargar APK (99.8 MB)" bajo la sección "Versiones antiguas".
  4. -
  5. Espere a que la descarga termine y localice el archivo en su dispositivo.
  6. -
  7. Toque en el archivo y permita la instalación de fuentes desconocidas si se le solicita.
  8. -
  9. Siga las instrucciones en la pantalla y espere a que se complete la instalación.
  10. -
  11. Iniciar la aplicación y disfrutar de jugar Real Gangster Crime versión antigua APKPure.
  12. -
      -

      Los consejos y trucos para disfrutar de Real Gangster Crime versión antigua APKPure

      -

      Para disfrutar jugando Real Gangster Crime versión antigua APKPure, puede utilizar algunos consejos y trucos como:

      -

      -
        -
      • Usa el mapa para encontrar misiones, tiendas, vehículos y enemigos.
      • -
      • Usa la tienda para comprar armas, ropa y mejoras para tu personaje y vehículos.
      • -
      • Utilice el garaje para almacenar y personalizar sus vehículos.
      • -
      • Usa el teléfono para llamar a los miembros de tu pandilla u otros contactos para obtener ayuda o información.
      • -
      • Usa la configuración para ajustar los gráficos, el sonido, los controles y el idioma del juego.
      • -
      • Usa el menú de pausa para guardar, cargar o salir del juego.
      • -
          -

          Conclusión

          - -

          Esperamos que este artículo te haya ayudado a aprender más sobre Real Gangster Crime versión antigua APKPure y cómo descargarlo y jugarlo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. Gracias por leer y divertirse jugando Real Gangster Crime versión antigua APKPure!

          -

          Preguntas frecuentes

          -

          Aquí hay algunas preguntas frecuentes sobre Real Gangster Crime versión antigua APKPure:

          -
            -
          1. ¿Es seguro descargar e instalar la versión antigua de Real Gangster Crime APKPure?
          2. -

            Sí, Real Gangster Crime versión antigua APKPure es seguro para descargar e instalar desde el sitio web APKPure. APKPure es una fuente confiable que proporciona descargas gratuitas y seguras de aplicaciones Android en formato APK. Sin embargo, siempre debe tener cuidado al descargar aplicaciones de fuentes desconocidas y escanearlas en busca de virus o malware antes de instalarlas en su dispositivo.

            -
          3. ¿Cuáles son los requisitos del sistema para Real Gangster Crime versión antigua APKPure?
          4. -

            Los requisitos del sistema para Real Gangster Crime versión antigua APKPure son:

            -
              -
            • Android 4.1 o superior
            • -
            • Al menos 100 MB de espacio de almacenamiento libre
            • -
            • Al menos 1 GB de RAM
            • -
            • Una conexión a Internet estable
            • -
            -
          5. ¿Cómo puedo actualizar Real Gangster Crime versión antigua APKPure a la última versión?
          6. -

            Si desea actualizar Real Gangster Crime versión antigua APKPure a la última versión, puede hacerlo siguiendo estos pasos:

            -
              -
            • Ir a Google Play Store y buscar Real Gangster Crime.
            • -
            • Seleccione la aplicación y pulse el botón "Actualizar".
            • -
            • Espere a que la actualización termine y inicie la aplicación.
            • -
            -
          7. ¿Cómo puedo desinstalar Real Gangster Crime versión antigua APKPure desde mi dispositivo?
          8. -

            Si quieres desinstalar Real Gangster Crime versión antigua APKPure desde tu dispositivo, puedes hacerlo siguiendo estos pasos:

            -
              -
            • Ir a la configuración del dispositivo y toque en "Aplicaciones" o "Aplicaciones".
            • -
            • Encuentra y selecciona Real Gangster Crime de la lista de aplicaciones.
            • - -
            -
          9. ¿Dónde puedo encontrar más información sobre Real Gangster Crime versión antigua APKPure?
          10. -

            Si quieres encontrar más información sobre Real Gangster Crime versión antigua APKPure, puedes visitar estas fuentes:

            -
              -
            • El sitio web oficial de Naxeex Studio:
            • -
            • La página oficial de Facebook de Naxeex Studio:
            • -
            • El canal oficial de YouTube de Naxeex Studio:
            • -
            -
              - : https://apkpure.com/real-gangster-crime/com.gta.real.gangster.crime. : https://play.google.com/stores/apps/apps/detail.=id=com.gta.real.gangster.crime&hl=en_US https:/www.youtube./watchv/watch.c=w7Q0mZg9M :tps:na.///s.105468371092348 : https://www.youtube.com/channel/UCoUZGTc5JfwzN8HfyNZwYzg

              64aa2da5cf
              -
              -
              \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis Metro Surfistas Juego Para Windows 7 Softonic.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Metro Surfistas Juego Para Windows 7 Softonic.md deleted file mode 100644 index 76907ef03e4e1662d482c543c228d08564a5bde1..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis Metro Surfistas Juego Para Windows 7 Softonic.md +++ /dev/null @@ -1,60 +0,0 @@ - -

              Subway Surfers Juego de descarga gratuita para Windows 7 Softonic

              -

              Si estás buscando un divertido, adictivo y colorido juego de running sin fin que te mantendrá entretenido durante horas, entonces deberías probar Subway Surfers. Este juego es uno de los juegos más populares y descargados en el mundo, con más de mil millones de jugadores. En este juego, te unirás a Jake, Tricky, Fresh y otros personajes geniales mientras huyen del inspector gruñón y su perro en las vías del metro. Tendrás que deslizar hacia la izquierda, derecha, arriba y abajo para esquivar trenes, autobuses, barreras, túneles y otros obstáculos mientras recoges monedas, potenciadores, llaves y otros objetos. También tendrás que completar misiones, desafíos y eventos para ganar recompensas y desbloquear nuevos personajes, tableros, trajes, ubicaciones y más. Subway Surfers es un juego que nunca se vuelve aburrido o repetitivo porque siempre tiene algo nuevo y emocionante que ofrecer.

              -

              descargar gratis metro surfistas juego para windows 7 softonic


              Download Zip ✔✔✔ https://bltlly.com/2v6LfL



              -

              Pero ¿cómo se puede jugar a este increíble juego en su computadora con Windows 7? Bueno, puede descargarlo fácilmente desde Softonic, una de las fuentes de software y aplicaciones más confiables y confiables. Softonic es un sitio web que ofrece descargas gratuitas de varios juegos, programas, herramientas, utilidades y más para diferentes plataformas. Puedes encontrar Subway Surfers en Softonic junto con otros juegos similares como Temple Run, Sonic Dash, Minion Rush, etc. En este artículo, te mostraremos cómo descargar Subway Surfers para Windows 7 Softonic en unos sencillos pasos. También le daremos algunos consejos y trucos sobre cómo jugar Subway Surfers en Windows 7 Softonic como un profesional. ¡Así que vamos a empezar!

              -

              Cómo descargar Subway Surfers para Windows 7 Softonic

              -

              Descargar Subway Surfers para Windows 7 Softonic es muy fácil y rápido. Todo lo que necesita es una conexión a Internet estable y algo de espacio libre en su disco duro. Estos son los pasos que debe seguir:

              -
                -
              1. Visite el Sitio web Softonic ( 1 ) y busque Subway Surfers en la barra de búsqueda.
              2. - -
              3. Ejecute el instalador (archivo.exe) que descargó y siga las instrucciones para completar la instalación. Es posible que deba aceptar los términos y condiciones y elegir algunas opciones como el idioma, la carpeta de destino, los accesos directos, etc.
              4. -
              -

              Felicidades! Usted ha descargado con éxito Subway Surfers para Windows 7 Softonic. Ahora puedes disfrutar de este impresionante juego en tu PC cuando quieras.

              -

              Cómo jugar Subway Surfers en Windows 7 Softonic

              -

              Jugar a Subway Surfers en Windows 7 Softonic es muy fácil y divertido. El juego tiene una interfaz simple e intuitiva que te permite controlar a tu personaje con solo unos pocos gestos. Estos son los pasos que debes seguir:

              -
                -
              1. Iniciar el juego desde el acceso directo del escritorio o el menú de inicio. Verá el menú principal con varias opciones como juego, ajustes, tienda, equipo, etc.
              2. -
              3. Elige tu personaje y tablero entre los que hayas desbloqueado o comprado. También puedes personalizar el atuendo y los accesorios de tu personaje.
              4. -
              5. Comience a correr en las vías del metro haciendo clic en el botón de reproducción. Verás a tu personaje corriendo automáticamente y tendrás que deslizar hacia la izquierda, derecha, arriba y abajo para esquivar obstáculos, recoger monedas y power-ups. También puedes usar las flechas o el ratón para mover a tu personaje.
              6. -
              7. Completa misiones, desafíos y eventos para ganar recompensas y desbloquear nuevos artículos. Puede comprobar su progreso y objetivos haciendo clic en el botón de pausa o el icono de la misión en la esquina superior izquierda de la pantalla.
              8. -
              -

              ¡Eso es todo! Ahora estás listo para jugar a Subway Surfers en Windows 7 Softonic como un profesional. ¡Diviértete y disfruta del juego!

              -

              -

              Consejos y trucos para surfistas de metro en Windows 7 Softonic

              -

              Si quieres mejorar tus habilidades y rendimiento en Subway Surfers en Windows 7 Softonic, puedes seguir algunos consejos y trucos que te ayudarán a obtener puntuaciones más altas, más monedas y más diversión. Estos son algunos de ellos:

              -
                - -
              • Actualiza tus objetos y habilidades para que duren más y sean más eficaces. Puedes actualizar tus potenciadores, tableros, personajes y habilidades gastando monedas o llaves en la tienda. Actualizar aumentará la duración, velocidad, fuerza o frecuencia de tus objetos y habilidades.
              • -
              • Recoger las llaves y utilizarlos para revivir a ti mismo cuando se bloquea o ser atrapado por el inspector. Las llaves son objetos raros y valiosos que pueden salvar tu vida y permitirte continuar tu carrera. Puedes recoger las llaves encontrándolas en las pistas, completando misiones, viendo anuncios o comprándolas con dinero real.
              • -
              • Únete a un equipo o crear su propio y competir con otros jugadores de todo el mundo. Un equipo es un grupo de jugadores que comparten un nombre común, logotipo y objetivo. Puede unirse a un equipo o crear el suyo haciendo clic en el botón de equipo en el menú principal. Al ser parte de un equipo, puedes participar en competiciones semanales, ganar recompensas, chatear con otros miembros y mostrar tus habilidades.
              • -
              -

              Estos son algunos de los consejos y trucos que te ayudarán a jugar Subway Surfers en Windows 7 Softonic mejor. Por supuesto, hay muchas más cosas que puedes descubrir y aprender jugando el juego tú mismo. Así que no dudes en probar cosas nuevas y experimentar con diferentes estrategias.

              -

              Pros y contras de los surfistas de metro en Windows 7 Softonic

              -

              Subway Surfers en Windows 7 Softonic es un gran juego que tiene muchas ventajas y beneficios para sus jugadores. Sin embargo, también tiene algunos inconvenientes y limitaciones que debe tener en cuenta antes de descargarlo. Éstos son algunos de los pros y contras de Subway Surfers en Windows 7 Softonic:

              - -ProsContras -Divertido, adictivo y colorido juego sin fin con gráficos suaves y controlesPuede contener anuncios y compras en la aplicación que pueden ser molestos o tentadores - -Adecuado para todas las edades y niveles de habilidad con varios modos y opcionesPuede volverse repetitivo o aburrido después de un tiempo si no pruebas cosas nuevas o desafíos - -

              Como puedes ver, Subway Surfers en Windows 7 Softonic tiene sus pros y sus contras, pero en general, es un juego que vale la pena probar y jugar. Usted tendrá un montón de diversión y emoción en las vías del metro y escapar del inspector. También disfrutará de los gráficos coloridos, los controles suaves, los diversos personajes, tableros, ubicaciones y potenciadores, y las actualizaciones regulares y nuevas características. Subway Surfers en Windows 7 Softonic es un juego que te hará feliz y entretenido durante horas.

              -

              Conclusión

              -

              En conclusión, Subway Surfers es uno de los mejores y más populares juegos de running sin fin en el mundo. Es un juego que puedes descargar y jugar gratis en tu ordenador Windows 7 desde Softonic, una de las fuentes de software y aplicaciones más confiables y confiables. En este artículo, te mostramos cómo descargar Subway Surfers para Windows 7 Softonic en unos sencillos pasos. También te dimos algunos consejos y trucos sobre cómo jugar a Subway Surfers en Windows 7 Softonic como un profesional. También discutimos los pros y los contras de Subway Surfers en Windows 7 Softonic y por qué debería intentarlo. Esperamos que haya encontrado este artículo útil e informativo. Si lo hizo, por favor compártalo con sus amigos y familiares que podrían estar interesados en jugar Subway Surfers en Windows 7 Softonic. Y si tiene alguna pregunta o comentario, por favor déjelos en la sección de comentarios a continuación. Nos encantaría saber de usted.

              -

              Ahora que sabes todo sobre Subway Surfers en Windows 7 Softonic, ¿qué estás esperando? Sigue adelante y descárgalo de Softonic hoy y comienza a correr en las vías del metro con Jake, Tricky, Fresh y otros personajes interesantes. ¡Diviértete y disfruta del juego!

              -

              Preguntas frecuentes

              - -
                -
              1. ¿Es seguro descargar Subway Surfers desde Softonic?
              2. -

                Sí, Subway Surfers es seguro para descargar desde Softonic. Softonic es un sitio web que ofrece descargas gratuitas de varios juegos, programas, herramientas, utilidades y más para diferentes plataformas. Tiene un estricto sistema de control de calidad que garantiza que todas las descargas estén libres de virus y malware. Puede confiar en Softonic para proporcionarle descargas seguras y confiables de Subway Surfers y otros juegos similares.

                -
              3. ¿Cuánto espacio ocupa Subway Surfers en Windows 7?
              4. -

                Subway Surfers ocupa unos 200 MB de espacio en Windows 7. Sin embargo, esto puede variar dependiendo de las actualizaciones y las nuevas características que se agreguen al juego. Es posible que necesite despejar algo de espacio en su disco duro antes de descargar o instalar Subway Surfers en Windows 7.

                -
              5. ¿Puedo jugar Subway Surfers sin conexión en Windows 7?
              6. -

                Sí, puede jugar Subway Surfers sin conexión en Windows 7. No necesita una conexión a Internet para ejecutar o jugar el juego. Sin embargo, es posible que necesite una conexión a Internet para descargar o actualizar el juego, para acceder a algunas funciones como la tienda o el equipo, o para sincronizar su progreso con su cuenta de Facebook.

                -
              7. ¿Puedo jugar Subway Surfers con un teclado o un ratón en Windows 7?
              8. -

                Sí, puedes jugar a Subway Surfers con un teclado o un ratón en Windows 7. El juego tiene una interfaz sencilla e intuitiva que te permite controlar a tu personaje con solo unos pocos gestos. Puedes usar las flechas o el ratón para mover a tu personaje hacia la izquierda, derecha, arriba y abajo. También puede utilizar la barra espaciadora o el botón izquierdo del ratón para activar los power-ups.

                -
              9. ¿Puedo transferir mi progreso desde mi dispositivo móvil a mi computadora con Windows 7?
              10. - -

              64aa2da5cf
              -
              -
              \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat/src/routes/conversation/[id]/+server.ts b/spaces/BetterAPI/BetterChat/src/routes/conversation/[id]/+server.ts deleted file mode 100644 index b00a89d06f429f81859f80b761359833e32fbcd6..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/routes/conversation/[id]/+server.ts +++ /dev/null @@ -1,236 +0,0 @@ -import { PUBLIC_SEP_TOKEN } from "$env/static/public"; -import { buildPrompt } from "$lib/buildPrompt.js"; -import { abortedGenerations } from "$lib/server/abortedGenerations.js"; -import { collections } from "$lib/server/database.js"; -import { modelEndpoint } from "$lib/server/modelEndpoint.js"; -import type { Message } from "$lib/types/Message.js"; -import { concatUint8Arrays } from "$lib/utils/concatUint8Arrays.js"; -import { streamToAsyncIterable } from "$lib/utils/streamToAsyncIterable"; -import { trimPrefix } from "$lib/utils/trimPrefix.js"; -import { trimSuffix } from "$lib/utils/trimSuffix.js"; -import type { TextGenerationStreamOutput } from "@huggingface/inference"; -import { error } from "@sveltejs/kit"; -import { ObjectId } from "mongodb"; -import { z } from "zod"; - -export async function POST({ request, fetch, locals, params }) { - // todo: add validation on params.id - const convId = new ObjectId(params.id); - const date = new Date(); - - const conv = await collections.conversations.findOne({ - _id: convId, - sessionId: locals.sessionId, - }); - - if (!conv) { - throw error(404, "Conversation not found"); - } - - const json = await request.json(); - const { - inputs: newPrompt, - options: { id: messageId, is_retry }, - } = z - .object({ - inputs: z.string().trim().min(1), - options: z.object({ - id: z.optional(z.string().uuid()), - is_retry: z.optional(z.boolean()), - }), - }) - .parse(json); - - const messages = (() => { - if (is_retry && messageId) { - let retryMessageIdx = conv.messages.findIndex((message) => message.id === messageId); - if (retryMessageIdx === -1) { - retryMessageIdx = conv.messages.length; - } - return [ - ...conv.messages.slice(0, retryMessageIdx), - { content: newPrompt, from: "user", id: messageId as Message["id"] }, - ]; - } - return [ - ...conv.messages, - { content: newPrompt, from: "user", id: (messageId as Message["id"]) || crypto.randomUUID() }, - ]; - })() satisfies Message[]; - - // Todo: on-the-fly migration, remove later - for (const message of messages) { - if (!message.id) { - message.id = crypto.randomUUID(); - } - } - const prompt = buildPrompt(messages); - - const randomEndpoint = modelEndpoint(); - - const abortController = new AbortController(); - - const resp = await fetch(randomEndpoint.endpoint, { - headers: { - "Content-Type": request.headers.get("Content-Type") ?? "application/json", - Authorization: randomEndpoint.authorization, - }, - method: "POST", - body: JSON.stringify({ - ...json, - inputs: prompt, - }), - signal: abortController.signal, - }); - - const [stream1, stream2] = resp.body!.tee(); - - async function saveMessage() { - let generated_text = await parseGeneratedText(stream2, convId, date, abortController); - - // We could also check if PUBLIC_ASSISTANT_MESSAGE_TOKEN is present and use it to slice the text - if (generated_text.startsWith(prompt)) { - generated_text = generated_text.slice(prompt.length); - } - - generated_text = trimSuffix(trimPrefix(generated_text, "<|startoftext|>"), PUBLIC_SEP_TOKEN); - - messages.push({ from: "assistant", content: generated_text, id: crypto.randomUUID() }); - - await collections.conversations.updateOne( - { - _id: convId, - }, - { - $set: { - messages, - updatedAt: new Date(), - }, - } - ); - } - - saveMessage().catch(console.error); - - // Todo: maybe we should wait for the message to be saved before ending the response - in case of errors - return new Response(stream1, { - headers: Object.fromEntries(resp.headers.entries()), - status: resp.status, - statusText: resp.statusText, - }); -} - -export async function DELETE({ locals, params }) { - const convId = new ObjectId(params.id); - - const conv = await collections.conversations.findOne({ - _id: convId, - sessionId: locals.sessionId, - }); - - if (!conv) { - throw error(404, "Conversation not found"); - } - - await collections.conversations.deleteOne({ _id: conv._id }); - - return new Response(); -} - -async function parseGeneratedText( - stream: ReadableStream, - conversationId: ObjectId, - promptedAt: Date, - abortController: AbortController -): Promise { - const inputs: Uint8Array[] = []; - for await (const input of streamToAsyncIterable(stream)) { - inputs.push(input); - - const date = abortedGenerations.get(conversationId.toString()); - - if (date && date > promptedAt) { - abortController.abort("Cancelled by user"); - const completeInput = concatUint8Arrays(inputs); - - const lines = new TextDecoder() - .decode(completeInput) - .split("\n") - .filter((line) => line.startsWith("data:")); - - const tokens = lines.map((line) => { - try { - const json: TextGenerationStreamOutput = JSON.parse(line.slice("data:".length)); - return json.token.text; - } catch { - return ""; - } - }); - return tokens.join(""); - } - } - - // Merge inputs into a single Uint8Array - const completeInput = concatUint8Arrays(inputs); - - // Get last line starting with "data:" and parse it as JSON to get the generated text - const message = new TextDecoder().decode(completeInput); - - let lastIndex = message.lastIndexOf("\ndata:"); - if (lastIndex === -1) { - lastIndex = message.indexOf("data"); - } - - if (lastIndex === -1) { - console.error("Could not parse in last message"); - } - - let lastMessage = message.slice(lastIndex).trim().slice("data:".length); - if (lastMessage.includes("\n")) { - lastMessage = lastMessage.slice(0, lastMessage.indexOf("\n")); - } - - const lastMessageJSON = JSON.parse(lastMessage); - - if (lastMessageJSON.error) { - throw new Error(lastMessageJSON.error); - } - - const res = lastMessageJSON.generated_text; - - if (typeof res !== "string") { - throw new Error("Could not parse generated text"); - } - - return res; -} - -export async function PATCH({ request, locals, params }) { - const { title } = z - .object({ title: z.string().trim().min(1).max(100) }) - .parse(await request.json()); - - const convId = new ObjectId(params.id); - - const conv = await collections.conversations.findOne({ - _id: convId, - sessionId: locals.sessionId, - }); - - if (!conv) { - throw error(404, "Conversation not found"); - } - - await collections.conversations.updateOne( - { - _id: convId, - }, - { - $set: { - title, - }, - } - ); - - return new Response(); -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langturkishmodel.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langturkishmodel.py deleted file mode 100644 index 291857c25c83f91a151c1d7760e8e5e09c1ee238..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langturkishmodel.py +++ /dev/null @@ -1,4380 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -TURKISH_LANG_MODEL = { - 23: { # 'A' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 37: { # 'B' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 47: { # 'C' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 39: { # 'D' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 0, # 'ş' - }, - 29: { # 'E' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 52: { # 'F' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 1, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 2, # 'ş' - }, - 36: { # 'G' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 2, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 1, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 45: { # 'H' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 2, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 2, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 53: { # 'I' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 60: { # 'J' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 16: { # 'K' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 1, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 49: { # 'L' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 2, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 20: { # 'M' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 0, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 46: { # 'N' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 42: { # 'O' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 2, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 48: { # 'P' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 44: { # 'R' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 35: { # 'S' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 1, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 2, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 31: { # 'T' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 2, # 't' - 14: 2, # 'u' - 32: 1, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 51: { # 'U' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 38: { # 'V' - 23: 1, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 1, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 62: { # 'W' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 43: { # 'Y' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 0, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 1, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 56: { # 'Z' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 1: { # 'a' - 23: 3, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 1, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 21: { # 'b' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 3, # 'g' - 25: 1, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 2, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 28: { # 'c' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 2, # 'T' - 51: 2, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 3, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 1, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 1, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 2, # 'ş' - }, - 12: { # 'd' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 2: { # 'e' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 18: { # 'f' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 1, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 1, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 27: { # 'g' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 25: { # 'h' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 3: { # 'i' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 1, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 3, # 'g' - 25: 1, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 24: { # 'j' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 10: { # 'k' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 2, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 5: { # 'l' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 1, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 2, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 13: { # 'm' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 2, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 4: { # 'n' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 15: { # 'o' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 2, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 2, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 2, # 'ş' - }, - 26: { # 'p' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 7: { # 'r' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 1, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 8: { # 's' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 9: { # 't' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 14: { # 'u' - 23: 3, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 2, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 2, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 32: { # 'v' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 57: { # 'w' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 1, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 58: { # 'x' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 11: { # 'y' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 22: { # 'z' - 23: 2, # 'A' - 37: 2, # 'B' - 47: 1, # 'C' - 39: 2, # 'D' - 29: 3, # 'E' - 52: 1, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 2, # 'N' - 42: 2, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 3, # 'T' - 51: 2, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 1, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 2, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 1, # 'Ş' - 19: 2, # 'ş' - }, - 63: { # '·' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 54: { # 'Ç' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 3, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 50: { # 'Ö' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 2, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 1, # 'N' - 42: 2, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 1, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 1, # 's' - 9: 2, # 't' - 14: 0, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 55: { # 'Ü' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 59: { # 'â' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 0, # 'ş' - }, - 33: { # 'ç' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 0, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 61: { # 'î' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 1, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 34: { # 'ö' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 1, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 3, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 1, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 17: { # 'ü' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 30: { # 'ğ' - 23: 0, # 'A' - 37: 2, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 2, # 'N' - 42: 2, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 3, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 2, # 'İ' - 6: 2, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 41: { # 'İ' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 2, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 6: { # 'ı' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 40: { # 'Ş' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 2, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 0, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 3, # 'f' - 27: 0, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 1, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 1, # 'ü' - 30: 2, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 2, # 'ş' - }, - 19: { # 'ş' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 2, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 1, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_9_TURKISH_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 255, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 255, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 255, # ' ' - 33: 255, # '!' - 34: 255, # '"' - 35: 255, # '#' - 36: 255, # '$' - 37: 255, # '%' - 38: 255, # '&' - 39: 255, # "'" - 40: 255, # '(' - 41: 255, # ')' - 42: 255, # '*' - 43: 255, # '+' - 44: 255, # ',' - 45: 255, # '-' - 46: 255, # '.' - 47: 255, # '/' - 48: 255, # '0' - 49: 255, # '1' - 50: 255, # '2' - 51: 255, # '3' - 52: 255, # '4' - 53: 255, # '5' - 54: 255, # '6' - 55: 255, # '7' - 56: 255, # '8' - 57: 255, # '9' - 58: 255, # ':' - 59: 255, # ';' - 60: 255, # '<' - 61: 255, # '=' - 62: 255, # '>' - 63: 255, # '?' - 64: 255, # '@' - 65: 23, # 'A' - 66: 37, # 'B' - 67: 47, # 'C' - 68: 39, # 'D' - 69: 29, # 'E' - 70: 52, # 'F' - 71: 36, # 'G' - 72: 45, # 'H' - 73: 53, # 'I' - 74: 60, # 'J' - 75: 16, # 'K' - 76: 49, # 'L' - 77: 20, # 'M' - 78: 46, # 'N' - 79: 42, # 'O' - 80: 48, # 'P' - 81: 69, # 'Q' - 82: 44, # 'R' - 83: 35, # 'S' - 84: 31, # 'T' - 85: 51, # 'U' - 86: 38, # 'V' - 87: 62, # 'W' - 88: 65, # 'X' - 89: 43, # 'Y' - 90: 56, # 'Z' - 91: 255, # '[' - 92: 255, # '\\' - 93: 255, # ']' - 94: 255, # '^' - 95: 255, # '_' - 96: 255, # '`' - 97: 1, # 'a' - 98: 21, # 'b' - 99: 28, # 'c' - 100: 12, # 'd' - 101: 2, # 'e' - 102: 18, # 'f' - 103: 27, # 'g' - 104: 25, # 'h' - 105: 3, # 'i' - 106: 24, # 'j' - 107: 10, # 'k' - 108: 5, # 'l' - 109: 13, # 'm' - 110: 4, # 'n' - 111: 15, # 'o' - 112: 26, # 'p' - 113: 64, # 'q' - 114: 7, # 'r' - 115: 8, # 's' - 116: 9, # 't' - 117: 14, # 'u' - 118: 32, # 'v' - 119: 57, # 'w' - 120: 58, # 'x' - 121: 11, # 'y' - 122: 22, # 'z' - 123: 255, # '{' - 124: 255, # '|' - 125: 255, # '}' - 126: 255, # '~' - 127: 255, # '\x7f' - 128: 180, # '\x80' - 129: 179, # '\x81' - 130: 178, # '\x82' - 131: 177, # '\x83' - 132: 176, # '\x84' - 133: 175, # '\x85' - 134: 174, # '\x86' - 135: 173, # '\x87' - 136: 172, # '\x88' - 137: 171, # '\x89' - 138: 170, # '\x8a' - 139: 169, # '\x8b' - 140: 168, # '\x8c' - 141: 167, # '\x8d' - 142: 166, # '\x8e' - 143: 165, # '\x8f' - 144: 164, # '\x90' - 145: 163, # '\x91' - 146: 162, # '\x92' - 147: 161, # '\x93' - 148: 160, # '\x94' - 149: 159, # '\x95' - 150: 101, # '\x96' - 151: 158, # '\x97' - 152: 157, # '\x98' - 153: 156, # '\x99' - 154: 155, # '\x9a' - 155: 154, # '\x9b' - 156: 153, # '\x9c' - 157: 152, # '\x9d' - 158: 151, # '\x9e' - 159: 106, # '\x9f' - 160: 150, # '\xa0' - 161: 149, # '¡' - 162: 148, # '¢' - 163: 147, # '£' - 164: 146, # '¤' - 165: 145, # '¥' - 166: 144, # '¦' - 167: 100, # '§' - 168: 143, # '¨' - 169: 142, # '©' - 170: 141, # 'ª' - 171: 140, # '«' - 172: 139, # '¬' - 173: 138, # '\xad' - 174: 137, # '®' - 175: 136, # '¯' - 176: 94, # '°' - 177: 80, # '±' - 178: 93, # '²' - 179: 135, # '³' - 180: 105, # '´' - 181: 134, # 'µ' - 182: 133, # '¶' - 183: 63, # '·' - 184: 132, # '¸' - 185: 131, # '¹' - 186: 130, # 'º' - 187: 129, # '»' - 188: 128, # '¼' - 189: 127, # '½' - 190: 126, # '¾' - 191: 125, # '¿' - 192: 124, # 'À' - 193: 104, # 'Á' - 194: 73, # 'Â' - 195: 99, # 'Ã' - 196: 79, # 'Ä' - 197: 85, # 'Å' - 198: 123, # 'Æ' - 199: 54, # 'Ç' - 200: 122, # 'È' - 201: 98, # 'É' - 202: 92, # 'Ê' - 203: 121, # 'Ë' - 204: 120, # 'Ì' - 205: 91, # 'Í' - 206: 103, # 'Î' - 207: 119, # 'Ï' - 208: 68, # 'Ğ' - 209: 118, # 'Ñ' - 210: 117, # 'Ò' - 211: 97, # 'Ó' - 212: 116, # 'Ô' - 213: 115, # 'Õ' - 214: 50, # 'Ö' - 215: 90, # '×' - 216: 114, # 'Ø' - 217: 113, # 'Ù' - 218: 112, # 'Ú' - 219: 111, # 'Û' - 220: 55, # 'Ü' - 221: 41, # 'İ' - 222: 40, # 'Ş' - 223: 86, # 'ß' - 224: 89, # 'à' - 225: 70, # 'á' - 226: 59, # 'â' - 227: 78, # 'ã' - 228: 71, # 'ä' - 229: 82, # 'å' - 230: 88, # 'æ' - 231: 33, # 'ç' - 232: 77, # 'è' - 233: 66, # 'é' - 234: 84, # 'ê' - 235: 83, # 'ë' - 236: 110, # 'ì' - 237: 75, # 'í' - 238: 61, # 'î' - 239: 96, # 'ï' - 240: 30, # 'ğ' - 241: 67, # 'ñ' - 242: 109, # 'ò' - 243: 74, # 'ó' - 244: 87, # 'ô' - 245: 102, # 'õ' - 246: 34, # 'ö' - 247: 95, # '÷' - 248: 81, # 'ø' - 249: 108, # 'ù' - 250: 76, # 'ú' - 251: 72, # 'û' - 252: 17, # 'ü' - 253: 6, # 'ı' - 254: 19, # 'ş' - 255: 107, # 'ÿ' -} - -ISO_8859_9_TURKISH_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-9", - language="Turkish", - char_to_order_map=ISO_8859_9_TURKISH_CHAR_TO_ORDER, - language_model=TURKISH_LANG_MODEL, - typical_positive_ratio=0.97029, - keep_ascii_letters=True, - alphabet="ABCDEFGHIJKLMNOPRSTUVYZabcdefghijklmnoprstuvyzÂÇÎÖÛÜâçîöûüĞğİıŞş", -) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_path.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_path.py deleted file mode 100644 index 3767523b784bb93b5b79890eff359628fcfcaa34..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_path.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -from typing import Union - -_Path = Union[str, os.PathLike] - - -def ensure_directory(path): - """Ensure that the parent directory of `path` exists""" - dirname = os.path.dirname(path) - os.makedirs(dirname, exist_ok=True) - - -def same_path(p1: _Path, p2: _Path) -> bool: - """Differs from os.path.samefile because it does not require paths to exist. - Purely string based (no comparison between i-nodes). - >>> same_path("a/b", "./a/b") - True - >>> same_path("a/b", "a/./b") - True - >>> same_path("a/b", "././a/b") - True - >>> same_path("a/b", "./a/b/c/..") - True - >>> same_path("a/b", "../a/b/c") - False - >>> same_path("a", "a/b") - False - """ - return os.path.normpath(p1) == os.path.normpath(p2) diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/__init__.py b/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docker/Dockerfile b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docker/Dockerfile deleted file mode 100644 index b8cb3f471eaf48e49759e17fcd4a83a885d96ff8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docker/Dockerfile +++ /dev/null @@ -1,43 +0,0 @@ -FROM nvidia/cuda:10.1-cudnn7-devel - -ENV DEBIAN_FRONTEND noninteractive -RUN apt-get update && apt-get install -y \ - python3-opencv ca-certificates python3-dev git wget sudo && \ - rm -rf /var/lib/apt/lists/* - -# create a non-root user -ARG USER_ID=1000 -RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo -RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers -USER appuser -WORKDIR /home/appuser - -ENV PATH="/home/appuser/.local/bin:${PATH}" -RUN wget https://bootstrap.pypa.io/get-pip.py && \ - python3 get-pip.py --user && \ - rm get-pip.py - -# install dependencies -# See https://pytorch.org/ for other options if you use a different version of CUDA -RUN pip install --user torch torchvision tensorboard cython -RUN pip install --user 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' - -RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' -# install detectron2 -RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo -ENV FORCE_CUDA="1" -# This will build detectron2 for all common cuda architectures and take a lot more time, -# because inside `docker build`, there is no way to tell which architecture will be used. -ENV TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" -RUN pip install --user -e detectron2_repo - -# Set a fixed model cache directory. -ENV FVCORE_CACHE="/tmp" -WORKDIR /home/appuser/detectron2_repo - -# run it, for example: -# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg -# python3 demo/demo.py \ - #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - #--input input.jpg --output outputs/ \ - #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/util/align.h b/spaces/CVPR/LIVE/thrust/thrust/detail/util/align.h deleted file mode 100644 index af97cd44a7cb86f195b38439dd1a10c111044c85..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/util/align.h +++ /dev/null @@ -1,59 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -#pragma once - -#include - -// functions to handle memory alignment - -namespace thrust -{ -namespace detail -{ -namespace util -{ - - -template -__host__ __device__ -T *align_up(T * ptr, detail::uintptr_t bytes) -{ - return (T *) ( bytes * (((detail::uintptr_t) ptr + (bytes - 1)) / bytes) ); -} - - -template -__host__ __device__ -T *align_down(T * ptr, detail::uintptr_t bytes) -{ - return (T *) ( bytes * (detail::uintptr_t(ptr) / bytes) ); -} - - -template -__host__ __device__ -bool is_aligned(T * ptr, detail::uintptr_t bytes = sizeof(T)) -{ - return detail::uintptr_t(ptr) % bytes == 0; -} - - -} // end namespace util -} // end namespace detail -} // end namespace thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/generate.h b/spaces/CVPR/LIVE/thrust/thrust/generate.h deleted file mode 100644 index a651dd0dccee089f4b31df03000e724fdab13648..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/generate.h +++ /dev/null @@ -1,213 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file generate.h - * \brief Fills a range with values "generated" from a function of no arguments - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup transformations - * \{ - */ - - -/*! \p generate assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,last). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The first element in the range of interest. - * \param last The last element in the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,last). - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable. - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate(thrust::host, v.begin(), v.end(), rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate_n - * \see http://www.sgi.com/tech/stl/generate.html - */ -template -__host__ __device__ - void generate(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - Generator gen); - - -/*! \p generate assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,last). - * - * \param first The first element in the range of interest. - * \param last The last element in the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,last). - * - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable. - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand. - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate(v.begin(), v.end(), rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate_n - * \see http://www.sgi.com/tech/stl/generate.html - */ -template - void generate(ForwardIterator first, - ForwardIterator last, - Generator gen); - - -/*! \p generate_n assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,first + n). The return value is first + n. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The first element in the range of interest. - * \param n The size of the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,first + n). - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam Size is an integral type (either signed or unsigned). - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to a type in \p OutputIterator's set of \c value_types. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate_n(thrust::host, v.begin(), 10, rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate - * \see http://www.sgi.com/tech/stl/generate.html - */ -template -__host__ __device__ - OutputIterator generate_n(const thrust::detail::execution_policy_base &exec, - OutputIterator first, - Size n, - Generator gen); - - -/*! \p generate_n assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,first + n). The return value is first + n. - * - * \param first The first element in the range of interest. - * \param n The size of the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,first + n). - * - * \tparam OutputIterator is a model of Output Iterator. - * \tparam Size is an integral type (either signed or unsigned). - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to a type in \p OutputIterator's set of \c value_types. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand. - * - * \code - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate_n(v.begin(), 10, rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate - * \see http://www.sgi.com/tech/stl/generate.html - */ -template - OutputIterator generate_n(OutputIterator first, - Size n, - Generator gen); - - -/*! \} // end transformations - */ - -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/match_cost.py b/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/match_cost.py deleted file mode 100644 index 38869737d66064ee5adea4b2c8ff26ae091e5f56..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/match_cost.py +++ /dev/null @@ -1,184 +0,0 @@ -import torch - -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.core.bbox.transforms import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh -from .builder import MATCH_COST - - -@MATCH_COST.register_module() -class BBoxL1Cost(object): - """BBoxL1Cost. - - Args: - weight (int | float, optional): loss_weight - box_format (str, optional): 'xyxy' for DETR, 'xywh' for Sparse_RCNN - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import BBoxL1Cost - >>> import torch - >>> self = BBoxL1Cost() - >>> bbox_pred = torch.rand(1, 4) - >>> gt_bboxes= torch.FloatTensor([[0, 0, 2, 4], [1, 2, 3, 4]]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(bbox_pred, gt_bboxes, factor) - tensor([[1.6172, 1.6422]]) - """ - - def __init__(self, weight=1., box_format='xyxy'): - self.weight = weight - assert box_format in ['xyxy', 'xywh'] - self.box_format = box_format - - def __call__(self, bbox_pred, gt_bboxes): - """ - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - [num_query, 4]. - gt_bboxes (Tensor): Ground truth boxes with normalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - - Returns: - torch.Tensor: bbox_cost value with weight - """ - if self.box_format == 'xywh': - gt_bboxes = bbox_xyxy_to_cxcywh(gt_bboxes) - elif self.box_format == 'xyxy': - bbox_pred = bbox_cxcywh_to_xyxy(bbox_pred) - bbox_cost = torch.cdist(bbox_pred, gt_bboxes, p=1) - return bbox_cost * self.weight - - -@MATCH_COST.register_module() -class FocalLossCost(object): - """FocalLossCost. - - Args: - weight (int | float, optional): loss_weight - alpha (int | float, optional): focal_loss alpha - gamma (int | float, optional): focal_loss gamma - eps (float, optional): default 1e-12 - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import FocalLossCost - >>> import torch - >>> self = FocalLossCost() - >>> cls_pred = torch.rand(4, 3) - >>> gt_labels = torch.tensor([0, 1, 2]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(cls_pred, gt_labels) - tensor([[-0.3236, -0.3364, -0.2699], - [-0.3439, -0.3209, -0.4807], - [-0.4099, -0.3795, -0.2929], - [-0.1950, -0.1207, -0.2626]]) - """ - - def __init__(self, weight=1., alpha=0.25, gamma=2, eps=1e-12): - self.weight = weight - self.alpha = alpha - self.gamma = gamma - self.eps = eps - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - - Returns: - torch.Tensor: cls_cost value with weight - """ - cls_pred = cls_pred.sigmoid() - neg_cost = -(1 - cls_pred + self.eps).log() * ( - 1 - self.alpha) * cls_pred.pow(self.gamma) - pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( - 1 - cls_pred).pow(self.gamma) - cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels] - return cls_cost * self.weight - - -@MATCH_COST.register_module() -class ClassificationCost(object): - """ClsSoftmaxCost. - - Args: - weight (int | float, optional): loss_weight - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import \ - ... ClassificationCost - >>> import torch - >>> self = ClassificationCost() - >>> cls_pred = torch.rand(4, 3) - >>> gt_labels = torch.tensor([0, 1, 2]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(cls_pred, gt_labels) - tensor([[-0.3430, -0.3525, -0.3045], - [-0.3077, -0.2931, -0.3992], - [-0.3664, -0.3455, -0.2881], - [-0.3343, -0.2701, -0.3956]]) - """ - - def __init__(self, weight=1.): - self.weight = weight - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - - Returns: - torch.Tensor: cls_cost value with weight - """ - # Following the official DETR repo, contrary to the loss that - # NLL is used, we approximate it in 1 - cls_score[gt_label]. - # The 1 is a constant that doesn't change the matching, - # so it can be omitted. - cls_score = cls_pred.softmax(-1) - cls_cost = -cls_score[:, gt_labels] - return cls_cost * self.weight - - -@MATCH_COST.register_module() -class IoUCost(object): - """IoUCost. - - Args: - iou_mode (str, optional): iou mode such as 'iou' | 'giou' - weight (int | float, optional): loss weight - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import IoUCost - >>> import torch - >>> self = IoUCost() - >>> bboxes = torch.FloatTensor([[1,1, 2, 2], [2, 2, 3, 4]]) - >>> gt_bboxes = torch.FloatTensor([[0, 0, 2, 4], [1, 2, 3, 4]]) - >>> self(bboxes, gt_bboxes) - tensor([[-0.1250, 0.1667], - [ 0.1667, -0.5000]]) - """ - - def __init__(self, iou_mode='giou', weight=1.): - self.weight = weight - self.iou_mode = iou_mode - - def __call__(self, bboxes, gt_bboxes): - """ - Args: - bboxes (Tensor): Predicted boxes with unnormalized coordinates - (x1, y1, x2, y2). Shape [num_query, 4]. - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - - Returns: - torch.Tensor: iou_cost value with weight - """ - # overlaps: [num_bboxes, num_gt] - overlaps = bbox_overlaps( - bboxes, gt_bboxes, mode=self.iou_mode, is_aligned=False) - # The 1 is a constant that doesn't change the matching, so omitted. - iou_cost = -overlaps - return iou_cost * self.weight diff --git a/spaces/CofAI/urlcut/index.html b/spaces/CofAI/urlcut/index.html deleted file mode 100644 index 7501e3fc759ca4490a051209aca4fa93893e14c8..0000000000000000000000000000000000000000 --- a/spaces/CofAI/urlcut/index.html +++ /dev/null @@ -1,49 +0,0 @@ - - - - CofURL.cut -

              - - -

              CofURLcut

              - -

              - -

              - -
              -

              - -

              - -

              - -

              - Link shortener by CofAI - - diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py deleted file mode 100644 index 93452b64696dc9b2cd2a347b8051729864bf9510..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from torch.utils.data.sampler import BatchSampler - - -class IterationBasedBatchSampler(BatchSampler): - """ - Wraps a BatchSampler, resampling from it until - a specified number of iterations have been sampled - """ - - def __init__(self, batch_sampler, num_iterations, start_iter=0): - self.batch_sampler = batch_sampler - self.num_iterations = num_iterations - self.start_iter = start_iter - - def __iter__(self): - iteration = self.start_iter - while iteration <= self.num_iterations: - # if the underlying sampler has a set_epoch method, like - # DistributedSampler, used for making each process see - # a different split of the dataset, then set it - if hasattr(self.batch_sampler.sampler, "set_epoch"): - self.batch_sampler.sampler.set_epoch(iteration) - for batch in self.batch_sampler: - iteration += 1 - if iteration > self.num_iterations: - break - yield batch - - def __len__(self): - return self.num_iterations diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/_binary.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/_binary.py deleted file mode 100644 index a74ee9eb6f341aca9e074c0acc4b306a354175a0..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/_binary.py +++ /dev/null @@ -1,102 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Binary input/output support routines. -# -# Copyright (c) 1997-2003 by Secret Labs AB -# Copyright (c) 1995-2003 by Fredrik Lundh -# Copyright (c) 2012 by Brian Crowell -# -# See the README file for information on usage and redistribution. -# - - -"""Binary input/output support routines.""" - - -from struct import pack, unpack_from - - -def i8(c): - return c if c.__class__ is int else c[0] - - -def o8(i): - return bytes((i & 255,)) - - -# Input, le = little endian, be = big endian -def i16le(c, o=0): - """ - Converts a 2-bytes (16 bits) string to an unsigned integer. - - :param c: string containing bytes to convert - :param o: offset of bytes to convert in string - """ - return unpack_from("h", c, o)[0] - - -def i32le(c, o=0): - """ - Converts a 4-bytes (32 bits) string to an unsigned integer. - - :param c: string containing bytes to convert - :param o: offset of bytes to convert in string - """ - return unpack_from("H", c, o)[0] - - -def i32be(c, o=0): - return unpack_from(">I", c, o)[0] - - -# Output, le = little endian, be = big endian -def o16le(i): - return pack("H", i) - - -def o32be(i): - return pack(">I", i) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/tcp_helpers.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/tcp_helpers.py deleted file mode 100644 index 88b244223741ad2decb6cb612eae644fae88b2b2..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/tcp_helpers.py +++ /dev/null @@ -1,37 +0,0 @@ -"""Helper methods to tune a TCP connection""" - -import asyncio -import socket -from contextlib import suppress -from typing import Optional # noqa - -__all__ = ("tcp_keepalive", "tcp_nodelay") - - -if hasattr(socket, "SO_KEEPALIVE"): - - def tcp_keepalive(transport: asyncio.Transport) -> None: - sock = transport.get_extra_info("socket") - if sock is not None: - sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) - -else: - - def tcp_keepalive(transport: asyncio.Transport) -> None: # pragma: no cover - pass - - -def tcp_nodelay(transport: asyncio.Transport, value: bool) -> None: - sock = transport.get_extra_info("socket") - - if sock is None: - return - - if sock.family not in (socket.AF_INET, socket.AF_INET6): - return - - value = bool(value) - - # socket may be closed already, on windows OSError get raised - with suppress(OSError): - sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, value) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/BitmapGlyphMetrics.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/BitmapGlyphMetrics.py deleted file mode 100644 index 10b4f828213b8320d54eefed3d5e66f2ba532101..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/BitmapGlyphMetrics.py +++ /dev/null @@ -1,64 +0,0 @@ -# Since bitmap glyph metrics are shared between EBLC and EBDT -# this class gets its own python file. -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -import logging - - -log = logging.getLogger(__name__) - -bigGlyphMetricsFormat = """ - > # big endian - height: B - width: B - horiBearingX: b - horiBearingY: b - horiAdvance: B - vertBearingX: b - vertBearingY: b - vertAdvance: B -""" - -smallGlyphMetricsFormat = """ - > # big endian - height: B - width: B - BearingX: b - BearingY: b - Advance: B -""" - - -class BitmapGlyphMetrics(object): - def toXML(self, writer, ttFont): - writer.begintag(self.__class__.__name__) - writer.newline() - for metricName in sstruct.getformat(self.__class__.binaryFormat)[1]: - writer.simpletag(metricName, value=getattr(self, metricName)) - writer.newline() - writer.endtag(self.__class__.__name__) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - metricNames = set(sstruct.getformat(self.__class__.binaryFormat)[1]) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - # Make sure this is a metric that is needed by GlyphMetrics. - if name in metricNames: - vars(self)[name] = safeEval(attrs["value"]) - else: - log.warning( - "unknown name '%s' being ignored in %s.", - name, - self.__class__.__name__, - ) - - -class BigGlyphMetrics(BitmapGlyphMetrics): - binaryFormat = bigGlyphMetricsFormat - - -class SmallGlyphMetrics(BitmapGlyphMetrics): - binaryFormat = smallGlyphMetricsFormat diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Empty-585389a4.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Empty-585389a4.js deleted file mode 100644 index e2aaf38ddd51bb819a24a52a8ba0f4eabe3a3eb4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Empty-585389a4.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as h,e as b,s as z,a9 as v,N as f,K as g,U as u,p as k,M as C,ab as E,ac as B,ad as R,z as S,v as q,A,h as K}from"./index-3370be2a.js";import"./Button-89624748.js";function M(n){let e,o,a;const _=n[5].default,s=v(_,n,n[4],null);return{c(){e=f("div"),o=f("div"),s&&s.c(),g(o,"class","icon svelte-lk9eg8"),g(e,"class","empty svelte-lk9eg8"),u(e,"small",n[0]==="small"),u(e,"large",n[0]==="large"),u(e,"unpadded_box",n[1]),u(e,"small_parent",n[3])},m(t,i){k(t,e,i),C(e,o),s&&s.m(o,null),n[6](e),a=!0},p(t,[i]){s&&s.p&&(!a||i&16)&&E(s,_,t,t[4],a?R(_,t[4],i,null):B(t[4]),null),(!a||i&1)&&u(e,"small",t[0]==="small"),(!a||i&1)&&u(e,"large",t[0]==="large"),(!a||i&2)&&u(e,"unpadded_box",t[1]),(!a||i&8)&&u(e,"small_parent",t[3])},i(t){a||(S(s,t),a=!0)},o(t){q(s,t),a=!1},d(t){t&&A(e),s&&s.d(t),n[6](null)}}}function N(n,e,o){let a,{$$slots:_={},$$scope:s}=e,{size:t="small"}=e,{unpadded_box:i=!1}=e,d;function m(l){if(!l)return;const{height:r}=l.getBoundingClientRect(),{height:c}=l.parentElement?.getBoundingClientRect()||{height:r};return r>c+2}function p(l){K[l?"unshift":"push"](()=>{d=l,o(2,d)})}return n.$$set=l=>{"size"in l&&o(0,t=l.size),"unpadded_box"in l&&o(1,i=l.unpadded_box),"$$scope"in l&&o(4,s=l.$$scope)},n.$$.update=()=>{n.$$.dirty&4&&o(3,a=m(d))},[t,i,d,a,s,_,p]}class w extends h{constructor(e){super(),b(this,e,N,M,z,{size:0,unpadded_box:1})}}export{w as E}; -//# sourceMappingURL=Empty-585389a4.js.map diff --git a/spaces/Dagfinn1962/stablediffusion-articlera/app.py b/spaces/Dagfinn1962/stablediffusion-articlera/app.py deleted file mode 100644 index ae1272f9e867899df92b59d765dca2ba11205d1d..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/stablediffusion-articlera/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import numpy as np -from PIL import Image - - - -models = [ - {"name": "❤ STABLE DIFFUSION MODELS ==========", "url": "stabilityai/stable-diffusion-2-1"}, - {"name": "SD ComVis 1.2","url": "CompVis/stable-diffusion-v1-2"}, - {"name": "SD Comvis 1.4","url": "CompVis/stable-diffusion-v1-4"}, - {"name": "SD runawayml 1.5","url": "runwayml/stable-diffusion-v1-5"}, - {"name": "SD stable-diffusion xl base 1.0","url": "stabilityai/stable-diffusion-xl-base-1.0"}, - {"name": "SD Dreamshaper-Anime","url": "Lykon/DreamShaper"}, - {"name": "Dreamlike Anime","url": "dreamlike-art/dreamlike-photoreal-2.0"}, - {"name": "❤ REALISTIC PHOTO MODELS ==========", "url": "dreamlike-art/dreamlike-photoreal-2.0"}, - {"name": "AmiIReal", "url": "stablediffusionapi/amireal"}, - {"name": "Analog Diffusion", "url": "wavymulder/Analog-Diffusion"}, - {"name": "Circulus 2.8", "url": "circulus/sd-photoreal-v2.8"}, - {"name": "UltraSkin", "url": "VegaKH/Ultraskin"}, - {"name": "Wavyfusion", "url": "wavymulder/wavyfusion"}, - {"name": "❤ SEMI-REALISTIC MODELS ==========", "url": "stablediffusionapi/all-526"}, - {"name": "All 526", "url": "stablediffusionapi/all-526"}, - {"name": "All 526 animated", "url": "stablediffusionapi/all-526-animated"}, - {"name": "Circulus Semi Real 2", "url": "circulus/sd-photoreal-semi-v2"}, - {"name": "Semi Real Mix", "url": "robotjung/SemiRealMix"}, - {"name": "SpyBG", "url": "stablediffusionapi/spybg"}, - {"name": "❤ STABLE DIFFUSION MODELS ==========", "url": "stabilityai/stable-diffusion-2-1"}, - {"name": "Stable Diffusion 1.4","url": "CompVis/stable-diffusion-v1-4"}, - {"name": "Stable Diffusion 1.5","url": "runwayml/stable-diffusion-v1-5"}, - {"name": "Stable Diffusion 2.1","url": "stabilityai/stable-diffusion-2-1"}, - {"name": "Stable Diffusion 2.1 Base","url": "stabilityai/stable-diffusion-2-1-base"}, - {"name": "Stable Diffusion 2.1 Unclip","url": "stabilityai/stable-diffusion-2-1-unclip"}, - {"name": "❤ SCI FI MODELS ==========", "url": "nitrosocke/Future-Diffusion"}, - {"name": "Future Diffusion", "url": "nitrosocke/Future-Diffusion"}, - {"name": "JWST Deep Space Diffusion", "url": "dallinmackay/JWST-Deep-Space-diffusion"}, - {"name": "Robo Diffusion 3 Base", "url": "nousr/robo-diffusion-2-base"}, - {"name": "Robo Diffusion", "url": "nousr/robo-diffusion"}, - {"name": "Tron Legacy Diffusion", "url": "dallinmackay/Tron-Legacy-diffusion"}, - {"name": "❤ 3D ART MODELS ==========", "url": "DucHaiten/DucHaitenAIart"}, - {"name": "DucHaiten Art", "url": "DucHaiten/DucHaitenAIart"}, - {"name": "DucHaiten ClassicAnime", "url": "DucHaiten/DH_ClassicAnime"}, - {"name": "DucHaiten DreamWorld", "url": "DucHaiten/DucHaitenDreamWorld"}, - {"name": "DucHaiten Journey", "url": "DucHaiten/DucHaitenJourney"}, - {"name": "DucHaiten StyleLikeMe", "url": "DucHaiten/DucHaiten-StyleLikeMe"}, - {"name": "DucHaiten SuperCute", "url": "DucHaiten/DucHaitenSuperCute"}, - {"name": "Redshift Diffusion 768", "url": "nitrosocke/redshift-diffusion-768"}, - {"name": "Redshift Diffusion", "url": "nitrosocke/redshift-diffusion"}, - - ] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/daspartho/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks (css ='main.css') as myface: - - #gr.HTML("
              Your Promt Here
              Choose model here
              " ) - with gr.Row(): - input_text = gr.Textbox(label=" ",placeholder="1.PROMPT IDEA HERE ! ",lines=4) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="2. Choose Model", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - - - ) - with gr.Row(): - see_prompts = gr.Button(" 3. GENERATE YOUR PROMT IDEA HERE!") - run = gr.Button("4. GENERATE THE IMAGE HERE!", varant="primery") - - # - with gr.Row(): - output1 = gr.Image(label="Generated Image") - output2 = gr.Image(label="Generated Image") - output3 = gr.Image(label="Generated Image") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3,]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - -title="Daylight (SD) ", -article="", -myface.queue(concurrency_count=200) -myface.launch(inline=True, max_threads=400) \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/top-menu/index.tsx b/spaces/Detomo/ai-comic-generation/src/app/interface/top-menu/index.tsx deleted file mode 100644 index 375a259f57cbff88b393a85b9e12280ef6747d29..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/interface/top-menu/index.tsx +++ /dev/null @@ -1,259 +0,0 @@ -"use client" - -import { useEffect, useState } from "react" -import { useSearchParams } from "next/navigation" -import Image from "next/image" - -import { - Select, - SelectContent, - SelectItem, - SelectTrigger, - SelectValue, -} from "@/components/ui/select" -import { Label } from "@/components/ui/label" -import { cn } from "@/lib/utils" -import { FontName, defaultFont } from "@/lib/fonts" -import { Input } from "@/components/ui/input" -import { PresetName, defaultPreset, nonRandomPresets, presets } from "@/app/engine/presets" -import { useStore } from "@/app/store" -import { Button } from "@/components/ui/button" -import { LayoutName, allLayoutLabels, defaultLayout, nonRandomLayouts } from "@/app/layouts" - -import layoutPreview0 from "../../../../public/layouts/layout0.jpg" -import layoutPreview1 from "../../../../public/layouts/layout1.jpg" -import layoutPreview2 from "../../../../public/layouts/layout2.jpg" -import layoutPreview3 from "../../../../public/layouts/layout3.jpg" -import { StaticImageData } from "next/image" -import { Switch } from "@/components/ui/switch" - -const layoutIcons: Partial> = { - Layout0: layoutPreview0, - Layout1: layoutPreview1, - Layout2: layoutPreview2, - Layout3: layoutPreview3 -} - -export function TopMenu() { - // const font = useStore(state => state.font) - // const setFont = useStore(state => state.setFont) - const preset = useStore(state => state.preset) - const prompt = useStore(state => state.prompt) - const layout = useStore(state => state.layout) - const setLayout = useStore(state => state.setLayout) - - const setShowCaptions = useStore(state => state.setShowCaptions) - const showCaptions = useStore(state => state.showCaptions) - - const generate = useStore(state => state.generate) - - const isGeneratingStory = useStore(state => state.isGeneratingStory) - const atLeastOnePanelIsBusy = useStore(state => state.atLeastOnePanelIsBusy) - const isBusy = isGeneratingStory || atLeastOnePanelIsBusy - - const searchParams = useSearchParams() - - const requestedPreset = (searchParams.get('preset') as PresetName) || defaultPreset - const requestedFont = (searchParams.get('font') as FontName) || defaultFont - const requestedPrompt = (searchParams.get('prompt') as string) || "" - const requestedLayout = (searchParams.get('layout') as LayoutName) || defaultLayout - - const [draftPrompt, setDraftPrompt] = useState(requestedPrompt) - const [draftPreset, setDraftPreset] = useState(requestedPreset) - const [draftLayout, setDraftLayout] = useState(requestedLayout) - - const handleSubmit = () => { - const promptChanged = draftPrompt.trim() !== prompt.trim() - const presetChanged = draftPreset !== preset.id - const layoutChanged = draftLayout !== layout - if (!isBusy && (promptChanged || presetChanged || layoutChanged)) { - generate(draftPrompt, draftPreset, draftLayout) - } - } - - useEffect(() => { - const layoutChanged = draftLayout !== layout - if (layoutChanged && !isBusy) { - setLayout(draftLayout) - } - }, [layout, draftLayout, isBusy]) - - return ( -
              -
              -
              - - {/* */} - - -
              -
              - - {/* */} - - -
              -
              - - -
              - {/* -
              - - -
              - */} -
              -
              -
              - { - setDraftPrompt(e.target.value) - }} - onKeyDown={({ key }) => { - if (key === 'Enter') { - handleSubmit() - } - }} - value={draftPrompt} - /> - -
              -
              - {/* - Let's add this feature later, because right now people - are confused about why they can't activate it -
              - - -
              - */} -
              - ) -} \ No newline at end of file diff --git a/spaces/Dify-AI/Baichuan2-13B-Chat/style.css b/spaces/Dify-AI/Baichuan2-13B-Chat/style.css deleted file mode 100644 index 303c3d7ef3b06c42b211797cd2d5af9800589092..0000000000000000000000000000000000000000 --- a/spaces/Dify-AI/Baichuan2-13B-Chat/style.css +++ /dev/null @@ -1,16 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: white; - background: #1565c0; - border-radius: 100vh; -} - -#component-0 { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/sampler.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/sampler.py deleted file mode 100644 index 72f1b46da117403c7f6ddcc1877bd9d70ded962b..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/sampler.py +++ /dev/null @@ -1,134 +0,0 @@ -''' -A sampler is just a list of integer listing the indexes of the -inputs in a data set to sample. For reproducibility, the -FixedRandomSubsetSampler uses a seeded prng to produce the same -sequence always. FixedSubsetSampler is just a wrapper for an -explicit list of integers. - -coordinate_sample solves another sampling problem: when testing -convolutional outputs, we can reduce data explosing by sampling -random points of the feature map rather than the entire feature map. -coordinate_sample does this in a deterministic way that is also -resolution-independent. -''' - -import numpy -import random -from torch.utils.data.sampler import Sampler - -class FixedSubsetSampler(Sampler): - """Represents a fixed sequence of data set indices. - Subsets can be created by specifying a subset of output indexes. - """ - def __init__(self, samples): - self.samples = samples - - def __iter__(self): - return iter(self.samples) - - def __len__(self): - return len(self.samples) - - def __getitem__(self, key): - return self.samples[key] - - def subset(self, new_subset): - return FixedSubsetSampler(self.dereference(new_subset)) - - def dereference(self, indices): - ''' - Translate output sample indices (small numbers indexing the sample) - to input sample indices (larger number indexing the original full set) - ''' - return [self.samples[i] for i in indices] - - -class FixedRandomSubsetSampler(FixedSubsetSampler): - """Samples a fixed number of samples from the dataset, deterministically. - Arguments: - data_source, - sample_size, - seed (optional) - """ - def __init__(self, data_source, start=None, end=None, seed=1): - rng = random.Random(seed) - shuffled = list(range(len(data_source))) - rng.shuffle(shuffled) - self.data_source = data_source - super(FixedRandomSubsetSampler, self).__init__(shuffled[start:end]) - - def class_subset(self, class_filter): - ''' - Returns only the subset matching the given rule. - ''' - if isinstance(class_filter, int): - rule = lambda d: d[1] == class_filter - else: - rule = class_filter - return self.subset([i for i, j in enumerate(self.samples) - if rule(self.data_source[j])]) - -def coordinate_sample(shape, sample_size, seeds, grid=13, seed=1, flat=False): - ''' - Returns a (end-start) sets of sample_size grid points within - the shape given. If the shape dimensions are a multiple of 'grid', - then sampled points within the same row will never be duplicated. - ''' - if flat: - sampind = numpy.zeros((len(seeds), sample_size), dtype=int) - else: - sampind = numpy.zeros((len(seeds), 2, sample_size), dtype=int) - assert sample_size <= grid - for j, seed in enumerate(seeds): - rng = numpy.random.RandomState(seed) - # Shuffle the 169 random grid squares, and pick :sample_size. - square_count = grid ** len(shape) - square = numpy.stack(numpy.unravel_index( - rng.choice(square_count, square_count)[:sample_size], - (grid,) * len(shape))) - # Then add a random offset to each x, y and put in the range [0...1) - # Notice this selects the same locations regardless of resolution. - uniform = (square + rng.uniform(size=square.shape)) / grid - # TODO: support affine scaling so that we can align receptive field - # centers exactly when sampling neurons in different layers. - coords = (uniform * numpy.array(shape)[:,None]).astype(int) - # Now take sample_size without replacement. We do this in a way - # such that if sample_size is decreased or increased up to 'grid', - # the selected points become a subset, not totally different points. - if flat: - sampind[j] = numpy.ravel_multi_index(coords, dims=shape) - else: - sampind[j] = coords - return sampind - -if __name__ == '__main__': - from numpy.testing import assert_almost_equal - # Test that coordinate_sample is deterministic, in-range, and scalable. - assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102)), - [[[14, 0, 12, 11, 8, 13, 11, 20, 7, 20], - [ 9, 22, 7, 11, 23, 18, 21, 15, 2, 5]]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 102)), - [[[ 7, 0, 6, 5, 4, 6, 5, 10, 3, 20 // 2], - [ 4, 11, 3, 5, 11, 9, 10, 7, 1, 5 // 2]]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(100, 102), - flat=True), - [[ 8, 24, 67, 103, 87, 79, 138, 94, 98, 53], - [ 95, 11, 81, 70, 63, 87, 75, 137, 40, 2+10*13]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 103), - flat=True), - [[ 95, 11, 81, 70, 63, 87, 75, 137, 40, 132], - [ 0, 78, 114, 111, 66, 45, 72, 73, 79, 135]]) - assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102), - flat=True), - [[373, 22, 319, 297, 231, 356, 307, 535, 184, 5+20*26]]) - # Test FixedRandomSubsetSampler - fss = FixedRandomSubsetSampler(range(10)) - assert len(fss) == 10 - assert_almost_equal(list(fss), [8, 0, 3, 4, 5, 2, 9, 6, 7, 1]) - fss = FixedRandomSubsetSampler(range(10), 3, 8) - assert len(fss) == 5 - assert_almost_equal(list(fss), [4, 5, 2, 9, 6]) - fss = FixedRandomSubsetSampler([(i, i % 3) for i in range(10)], - class_filter=1) - assert len(fss) == 3 - assert_almost_equal(list(fss), [4, 7, 1]) diff --git a/spaces/DragGan/DragGan/torch_utils/ops/filtered_lrelu.h b/spaces/DragGan/DragGan/torch_utils/ops/filtered_lrelu.h deleted file mode 100644 index 2c403e3f275f472315662321cad54dd0dbc56d00..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/torch_utils/ops/filtered_lrelu.h +++ /dev/null @@ -1,90 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct filtered_lrelu_kernel_params -{ - // These parameters decide which kernel to use. - int up; // upsampling ratio (1, 2, 4) - int down; // downsampling ratio (1, 2, 4) - int2 fuShape; // [size, 1] | [size, size] - int2 fdShape; // [size, 1] | [size, size] - - int _dummy; // Alignment. - - // Rest of the parameters. - const void* x; // Input tensor. - void* y; // Output tensor. - const void* b; // Bias tensor. - unsigned char* s; // Sign tensor in/out. NULL if unused. - const float* fu; // Upsampling filter. - const float* fd; // Downsampling filter. - - int2 pad0; // Left/top padding. - float gain; // Additional gain factor. - float slope; // Leaky ReLU slope on negative side. - float clamp; // Clamp after nonlinearity. - int flip; // Filter kernel flip for gradient computation. - - int tilesXdim; // Original number of horizontal output tiles. - int tilesXrep; // Number of horizontal tiles per CTA. - int blockZofs; // Block z offset to support large minibatch, channel dimensions. - - int4 xShape; // [width, height, channel, batch] - int4 yShape; // [width, height, channel, batch] - int2 sShape; // [width, height] - width is in bytes. Contiguous. Zeros if unused. - int2 sOfs; // [ofs_x, ofs_y] - offset between upsampled data and sign tensor. - int swLimit; // Active width of sign tensor in bytes. - - longlong4 xStride; // Strides of all tensors except signs, same component order as shapes. - longlong4 yStride; // - int64_t bStride; // - longlong3 fuStride; // - longlong3 fdStride; // -}; - -struct filtered_lrelu_act_kernel_params -{ - void* x; // Input/output, modified in-place. - unsigned char* s; // Sign tensor in/out. NULL if unused. - - float gain; // Additional gain factor. - float slope; // Leaky ReLU slope on negative side. - float clamp; // Clamp after nonlinearity. - - int4 xShape; // [width, height, channel, batch] - longlong4 xStride; // Input/output tensor strides, same order as in shape. - int2 sShape; // [width, height] - width is in elements. Contiguous. Zeros if unused. - int2 sOfs; // [ofs_x, ofs_y] - offset between upsampled data and sign tensor. -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct filtered_lrelu_kernel_spec -{ - void* setup; // Function for filter kernel setup. - void* exec; // Function for main operation. - int2 tileOut; // Width/height of launch tile. - int numWarps; // Number of warps per thread block, determines launch block size. - int xrep; // For processing multiple horizontal tiles per thread block. - int dynamicSharedKB; // How much dynamic shared memory the exec kernel wants. -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template filtered_lrelu_kernel_spec choose_filtered_lrelu_kernel(const filtered_lrelu_kernel_params& p, int sharedKB); -template void* choose_filtered_lrelu_act_kernel(void); -template cudaError_t copy_filters(cudaStream_t stream); - -//------------------------------------------------------------------------ diff --git a/spaces/Duskfallcrew/duskfall-alters-portrait-plus/README.md b/spaces/Duskfallcrew/duskfall-alters-portrait-plus/README.md deleted file mode 100644 index 6791e20a8cbadbeb1e717a0b7720378d5b89fc23..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/duskfall-alters-portrait-plus/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Duskfall Alters Portrait Plus -emoji: 👀 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/deploy/ONNXRuntime/README.md b/spaces/ECCV2022/bytetrack/deploy/ONNXRuntime/README.md deleted file mode 100644 index 4d0669081db3549f6db4a14189e73640de0688e2..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/deploy/ONNXRuntime/README.md +++ /dev/null @@ -1,19 +0,0 @@ -## ByteTrack-ONNXRuntime in Python - -This doc introduces how to convert your pytorch model into onnx, and how to run an onnxruntime demo to verify your convertion. - -### Convert Your Model to ONNX - -```shell -cd -python3 tools/export_onnx.py --output-name bytetrack_s.onnx -f exps/example/mot/yolox_s_mix_det.py -c pretrained/bytetrack_s_mot17.pth.tar -``` - -### ONNXRuntime Demo - -You can run onnx demo with **16 FPS** (96-core Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz): - -```shell -cd /deploy/ONNXRuntime -python3 onnx_inference.py -``` diff --git a/spaces/ECCV2022/bytetrack/yolox/models/yolox.py b/spaces/ECCV2022/bytetrack/yolox/models/yolox.py deleted file mode 100644 index 2f1fa1b34baaf6e0241cf289a2f73db48b33d914..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/models/yolox.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env python -# -*- encoding: utf-8 -*- -# Copyright (c) 2014-2021 Megvii Inc. All rights reserved. - -import torch.nn as nn - -from .yolo_head import YOLOXHead -from .yolo_pafpn import YOLOPAFPN - - -class YOLOX(nn.Module): - """ - YOLOX model module. The module list is defined by create_yolov3_modules function. - The network returns loss values from three YOLO layers during training - and detection results during test. - """ - - def __init__(self, backbone=None, head=None): - super().__init__() - if backbone is None: - backbone = YOLOPAFPN() - if head is None: - head = YOLOXHead(80) - - self.backbone = backbone - self.head = head - - def forward(self, x, targets=None): - # fpn output content features of [dark3, dark4, dark5] - fpn_outs = self.backbone(x) - - if self.training: - assert targets is not None - loss, iou_loss, conf_loss, cls_loss, l1_loss, num_fg = self.head( - fpn_outs, targets, x - ) - outputs = { - "total_loss": loss, - "iou_loss": iou_loss, - "l1_loss": l1_loss, - "conf_loss": conf_loss, - "cls_loss": cls_loss, - "num_fg": num_fg, - } - else: - outputs = self.head(fpn_outs) - - return outputs diff --git a/spaces/Epoching/DocumentQA/DiT_Extractor/base_utils.py b/spaces/Epoching/DocumentQA/DiT_Extractor/base_utils.py deleted file mode 100644 index 368333b3218d90acd366cac2a5b370fdfbdcda44..0000000000000000000000000000000000000000 --- a/spaces/Epoching/DocumentQA/DiT_Extractor/base_utils.py +++ /dev/null @@ -1,378 +0,0 @@ -# Copyright (c) 2022, Lawrence Livermore National Security, LLC. -# All rights reserved. -# See the top-level LICENSE and NOTICE files for details. -# LLNL-CODE-838964 - -# SPDX-License-Identifier: Apache-2.0-with-LLVM-exception - -from pdfminer.pdfpage import PDFParser -from pdfminer.pdfpage import PDFDocument -from pdfminer.pdfpage import PDFPage -from pdfminer.layout import LTTextBoxHorizontal -from pdfminer.layout import LTTextLineHorizontal -from pdfminer.layout import LTChar -from pdfminer.layout import LAParams -from pdfminer.layout import LTRect -from pdfminer.layout import LTFigure - -from pdfminer.converter import PDFPageAggregator -from pdfminer.pdfinterp import PDFResourceManager -from pdfminer.pdfinterp import PDFPageInterpreter -from pdfminer import pdfinterp - -from collections.abc import Iterable -from collections import Counter -from collections import OrderedDict - -import os - -# This is use for highlighting in PDFs -from PyPDF2.generic import ( - DictionaryObject, - NumberObject, - FloatObject, - NameObject, - TextStringObject, - ArrayObject -) - -# Used to extract pages -from PyPDF2 import PdfFileReader, PdfFileWriter - -def get_page_sizes(document): - parser = PDFParser(open(document, 'rb')) - doc = PDFDocument(parser) - pageSizesList = [] - for page in PDFPage.create_pages(doc): - # the media box that is the page size as list of 4 integers x0 y0 x1 y1 - pageSizesList.append(page.mediabox) # <- appending - return pageSizesList - -def get_page_count(document): - # Is there a better way of getting the page count than doing this? - parser = PDFParser(document) - tmpdoc = PDFDocument(parser) - page_count = pdfinterp.resolve1(tmpdoc.catalog['Pages'])['Count'] - return page_count - -def get_pdf_page_count(filename): - with open(filename, 'rb') as document: - return get_page_count(document) - -def get_pages(document, page_numbers = None): - #Create resource manager - rsrcmgr = PDFResourceManager() - # Set parameters for analysis. - laparams = LAParams() - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - interpreter = PDFPageInterpreter(rsrcmgr, device) - - page_count = get_page_count(document) - - if page_numbers is None: - page_numbers = range(page_count) - - for page, page_number in zip(PDFPage.get_pages(document, page_numbers), page_numbers): - interpreter.process_page(page) - # receive the LTPage object for the page. - layout = device.get_result() - #print("Yield page:", page_number) - yield layout, page_number - -def partial_overlaps(box, other): - """ - Determine if the two bounding boxes overlap eachother. - TODO: Really should just use a standard Python library for this. - - box -- 2 coordinate bounding box (x1,y1,x2,y2) - other -- 2 coordinate bounding box (x1,y1,x2,y2) - """ - # a1 x1 a2 x2 - # <------------------> - x_intersects = (other[0] < box[0] and other[2] > box[0]) or ( - other[0] < box[2] and other[2] > box[2]) - y_intersects = (other[1] < box[1] and other[3] > box[1]) or ( - other[1] < box[3] and other[3] > box[3]) - - intersects = x_intersects or y_intersects - # TODO: Simplify? - return intersects and overlaps(box, other) - #return intersects - -def overlaps(box, other): - """ - Determine if the two bounding boxes overlap eachother. - TODO: Really should just use a standard Python library for this. - - box -- 2 coordinate bounding box (x1,y1,x2,y2) - other -- 2 coordinate bounding box (x1,y1,x2,y2) - """ - x_intersects = box[0] > other[2] or box[2] < other[0] - y_intersects = box[1] > other[3] or box[3] < other[1] - - intersects = not (x_intersects or y_intersects) - return intersects - -def union(src, other): - """ - Expand src by union of other bbox - - src -- 2 coordinate bounding box (x1,y1,x2,y2) - other -- 2 coordinate bounding box (x1,y1,x2,y2) - - returns union of src and other - """ - xmin = min(src[0], other[0]) - ymin = min(src[1], other[1]) - xmax = max(src[2], other[2]) - ymax = max(src[3], other[3]) - - return [xmin, ymin, xmax, ymax] - - - -# See: https://gist.github.com/agentcooper/4c55133f5d95866acdee5017cd318558#file-pypdf2highlight-py -# x1, y1 starts in bottom left corner -def createHighlight(x1, y1, x2, y2, meta, color = [1, 0, 0]): - newHighlight = DictionaryObject() - - newHighlight.update({ - NameObject("/F"): NumberObject(4), - NameObject("/Type"): NameObject("/Annot"), - NameObject("/Subtype"): NameObject("/Highlight"), - - NameObject("/T"): TextStringObject(meta["author"]), - NameObject("/Contents"): TextStringObject(meta["contents"]), - - NameObject("/C"): ArrayObject([FloatObject(c) for c in color]), - NameObject("/Rect"): ArrayObject([ - FloatObject(x1), - FloatObject(y1), - FloatObject(x2), - FloatObject(y2) - ]), - NameObject("/QuadPoints"): ArrayObject([ - FloatObject(x1), - FloatObject(y2), - FloatObject(x2), - FloatObject(y2), - FloatObject(x1), - FloatObject(y1), - FloatObject(x2), - FloatObject(y1) - ]), - }) - - return newHighlight - -def addHighlightToPage(highlight, page, output): - highlight_ref = output._addObject(highlight); - - if "/Annots" in page: - page[NameObject("/Annots")].append(highlight_ref) - else: - page[NameObject("/Annots")] = ArrayObject([highlight_ref]) - -def get_pdf_words(document, page_numbers=None): - """ - Get all words from LTChar or LTTextLineHorizontal objects from the document. - - :param document: string path of the PDF file to process - :returns: A map of page #'s containing lists of coordinates and PDFMiner - objects. Ex.: {page_number: [[x1, y1, x2, y2, ],]} - """ - pdf_doc = open(document, 'rb') - - bboxes = {} - for layout, page in get_pages(pdf_doc, page_numbers): - #print(element.get_text()) - bboxes[page] = [] - for element in layout: - if not isinstance(element, Iterable): - continue # not iterable - for subElement in element: - #print('Subelement type:', type(subElement)) - if isinstance(subElement, LTChar): - if (subElement.get_text() == ' '): - pass # TODO: Handle word deliminator - # Print the character in this class - # print(subElement.get_text(), end='') - item = list(subElement.bbox) - item.append(subElement) - bboxes[page].append(item) - elif isinstance(subElement, LTTextLineHorizontal): - #print(subElement.bbox) - item = list(subElement.bbox) - item.append(subElement) - bboxes[page].append(item) - else: - pass - return bboxes - -def get_paragraphs(words): - paragraph_tolerance = 0.1 - max_height_diff = 1 - paragraphs = [] - - for page, elements in words.items(): - # Find nominal font size - # Round to int - freq = Counter() - for element in elements: - height = int(element[3] - element[1]) - #print(height,end=' ') - freq[height] += 1 - - nominal_font = freq.most_common(1)[0][0] - print("Nominal font is:", nominal_font) - - print("Page:", page) - x_offset_prev_line = None - prev_x_offset = None - prev_y_offset = None - paragraph_content = "" - #print("Element count:", len(elements)) - first_line = False - processed_first_line = False - - for element in elements: - x_offset = element[0] - y_offset = element[1] - height = int(element[3] - element[1]) - text = element[4].get_text() - - if x_offset_prev_line != None: - large_x_offset = (abs(x_offset_prev_line - x_offset) > paragraph_tolerance) - - # Font size mismatch? - if abs(height - nominal_font) > max_height_diff: - if len(paragraph_content) > 0: - print("Content append:", len(paragraph_content)) - paragraphs.append(paragraph_content) - paragraph_content = "" - print("Continue due to height != nominal_font") - continue - - print("ELEMENT:", element[0:4], text[0:15]) - if prev_y_offset is not None and len(paragraph_content) > 0: - if y_offset < prev_y_offset - height * 1.5: - print("Content append:", len(paragraph_content)) - if len(paragraph_content) > 0: - paragraphs.append(paragraph_content) - paragraph_content = text - prev_y_offset = None - continue - - prev_y_offset = y_offset - - prev_y_offset = y_offset - #print("element:", element) - if not isinstance(element[4], LTTextLineHorizontal): - continue - - #print("Running text:", text) - #print(f"x_offset_prev_line , x_offset]: {x_offset_prev_line, x_offset}") - - - # Find first paragraph - if x_offset_prev_line is None: - #print("x_offset_prev is none") - x_offset_prev_line = x_offset - if not processed_first_line: - first_line = True - processed_first_line = True - if height == nominal_font: - paragraph_content += text - #print("Continue due to x_offset_prev_line is none") - continue - - - - # Check case if first line was indented - if x_offset_prev_line > x_offset and first_line: - #print("x_offset < element[0]") - first_line = False - paragraph_content += text - x_offset_prev_line = x_offset - #print("Continue due to x_offset_prev_line > x_offset and first_line") - continue - - # is this indented? - # and ignore small changes - if x_offset_prev_line < x_offset and large_x_offset: - #print(f"x_offset_prev_line > x_offset: {x_offset_prev_line, x_offset}") - if height == nominal_font and len(paragraph_content) > 0: - paragraphs.append(paragraph_content) - - paragraph_content = text - # Reset at next line read - # What if next paragraph is also indented??? - x_offset_prev_line = None - #print("Continue due to x_offset_prev_line < x_offset and large_x_offset") - continue - - #print(element[0:4]) - if height == nominal_font: - paragraph_content += text - #print("End of loop") - - # TODO: Remove redundant space - if paragraph_content != "": - paragraphs.append(paragraph_content) - - # Find paragraph indexes - c = 0 - indexes = [] - for p in paragraphs: - c += len(p) - indexes.append(c) - - return paragraphs, indexes - -def get_pdf_elements(document, element_type, page_numbers=None): - pdf_doc = open(document, 'rb') - - items = {} - for layout, page in get_pages(pdf_doc, page_numbers): - #print(element.get_text()) - items[page] = [] - for element in layout: - if isinstance(element, element_type): - item = list(element.bbox) - if hasattr(element, 'non_stroking_color'): - item.append(element.non_stroking_color) - items[page].append(item) - print(items) - return items - -def get_large_colored_background_rectangles(document, page_numbers=None): - # Only include rectangles that are at least 4" x 1" in size - min_size = (288.0, 72.0) - - elements = get_pdf_elements(document, LTRect, page_numbers) - rects_out = {} - for page, rects in elements.items(): - print("Rects:", rects) - for rect in rects: - width = rect[2] - rect[0] - height = rect[3] - rect[1] - print("Dimensions:", width, height) - if (width > min_size[0] and - height > min_size[1]): - if not page in rects_out: - rects_out[page] = [] - rects_out[page].append(rect) - return rects_out - -def extract_pages(document, output, page_numbers=None): - pdf = PdfFileReader(document) - - pdf_writer = PdfFileWriter() - for page in page_numbers: - current_page = pdf.getPage(page) - pdf_writer.addPage(current_page) - - with open(output, "wb") as out: - pdf_writer.write(out) - diff --git a/spaces/EsoCode/text-generation-webui/extensions/send_pictures/script.py b/spaces/EsoCode/text-generation-webui/extensions/send_pictures/script.py deleted file mode 100644 index dbbeb0fd7c13b169d60999b236a3237ac3a5f0c5..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/extensions/send_pictures/script.py +++ /dev/null @@ -1,47 +0,0 @@ -import base64 -from io import BytesIO - -import gradio as gr -import torch -from transformers import BlipForConditionalGeneration, BlipProcessor - -from modules import chat, shared -from modules.ui import gather_interface_values - -# If 'state' is True, will hijack the next chat generation with -# custom input text given by 'value' in the format [text, visible_text] -input_hijack = { - 'state': False, - 'value': ["", ""] -} - -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float32).to("cpu") - - -def caption_image(raw_image): - inputs = processor(raw_image.convert('RGB'), return_tensors="pt").to("cpu", torch.float32) - out = model.generate(**inputs, max_new_tokens=100) - return processor.decode(out[0], skip_special_tokens=True) - - -def generate_chat_picture(picture, name1, name2): - text = f'*{name1} sends {name2} a picture that contains the following: “{caption_image(picture)}”*' - # lower the resolution of sent images for the chat, otherwise the log size gets out of control quickly with all the base64 values in visible history - picture.thumbnail((300, 300)) - buffer = BytesIO() - picture.save(buffer, format="JPEG") - img_str = base64.b64encode(buffer.getvalue()).decode('utf-8') - visible_text = f'{text}' - return text, visible_text - - -def ui(): - picture_select = gr.Image(label='Send a picture', type='pil') - - # Prepare the input hijack, update the interface values, call the generation function, and clear the picture - picture_select.upload( - lambda picture, name1, name2: input_hijack.update({"state": True, "value": generate_chat_picture(picture, name1, name2)}), [picture_select, shared.gradio['name1'], shared.gradio['name2']], None).then( - gather_interface_values, [shared.gradio[k] for k in shared.input_elements], shared.gradio['interface_state']).then( - chat.generate_chat_reply_wrapper, shared.input_params, shared.gradio['display'], show_progress=False).then( - lambda: None, None, picture_select, show_progress=False) diff --git a/spaces/EsoCode/text-generation-webui/extensions/superbooga/script.py b/spaces/EsoCode/text-generation-webui/extensions/superbooga/script.py deleted file mode 100644 index 66c79b3d91fcdde7bc72ce56522972d31d190c43..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/extensions/superbooga/script.py +++ /dev/null @@ -1,257 +0,0 @@ -import re -import textwrap - -import gradio as gr -from bs4 import BeautifulSoup - -from modules import chat, shared -from modules.logging_colors import logger - -from .chromadb import add_chunks_to_collector, make_collector -from .download_urls import download_urls - -params = { - 'chunk_count': 5, - 'chunk_count_initial': 10, - 'time_weight': 0, - 'chunk_length': 700, - 'chunk_separator': '', - 'strong_cleanup': False, - 'threads': 4, -} - -collector = make_collector() -chat_collector = make_collector() - - -def feed_data_into_collector(corpus, chunk_len, chunk_sep): - global collector - - # Defining variables - chunk_len = int(chunk_len) - chunk_sep = chunk_sep.replace(r'\n', '\n') - cumulative = '' - - # Breaking the data into chunks and adding those to the db - cumulative += "Breaking the input dataset...\n\n" - yield cumulative - if chunk_sep: - data_chunks = corpus.split(chunk_sep) - data_chunks = [[data_chunk[i:i + chunk_len] for i in range(0, len(data_chunk), chunk_len)] for data_chunk in data_chunks] - data_chunks = [x for y in data_chunks for x in y] - else: - data_chunks = [corpus[i:i + chunk_len] for i in range(0, len(corpus), chunk_len)] - - cumulative += f"{len(data_chunks)} chunks have been found.\n\nAdding the chunks to the database...\n\n" - yield cumulative - add_chunks_to_collector(data_chunks, collector) - cumulative += "Done." - yield cumulative - - -def feed_file_into_collector(file, chunk_len, chunk_sep): - yield 'Reading the input dataset...\n\n' - text = file.decode('utf-8') - for i in feed_data_into_collector(text, chunk_len, chunk_sep): - yield i - - -def feed_url_into_collector(urls, chunk_len, chunk_sep, strong_cleanup, threads): - all_text = '' - cumulative = '' - - urls = urls.strip().split('\n') - cumulative += f'Loading {len(urls)} URLs with {threads} threads...\n\n' - yield cumulative - for update, contents in download_urls(urls, threads=threads): - yield cumulative + update - - cumulative += 'Processing the HTML sources...' - yield cumulative - for content in contents: - soup = BeautifulSoup(content, features="html.parser") - for script in soup(["script", "style"]): - script.extract() - - strings = soup.stripped_strings - if strong_cleanup: - strings = [s for s in strings if re.search("[A-Za-z] ", s)] - - text = '\n'.join([s.strip() for s in strings]) - all_text += text - - for i in feed_data_into_collector(all_text, chunk_len, chunk_sep): - yield i - - -def apply_settings(chunk_count, chunk_count_initial, time_weight): - global params - params['chunk_count'] = int(chunk_count) - params['chunk_count_initial'] = int(chunk_count_initial) - params['time_weight'] = time_weight - settings_to_display = {k: params[k] for k in params if k in ['chunk_count', 'chunk_count_initial', 'time_weight']} - yield f"The following settings are now active: {str(settings_to_display)}" - - -def custom_generate_chat_prompt(user_input, state, **kwargs): - global chat_collector - - if state['mode'] == 'instruct': - results = collector.get_sorted(user_input, n_results=params['chunk_count']) - additional_context = '\nYour reply should be based on the context below:\n\n' + '\n'.join(results) - user_input += additional_context - else: - - def make_single_exchange(id_): - output = '' - output += f"{state['name1']}: {shared.history['internal'][id_][0]}\n" - output += f"{state['name2']}: {shared.history['internal'][id_][1]}\n" - return output - - if len(shared.history['internal']) > params['chunk_count'] and user_input != '': - chunks = [] - hist_size = len(shared.history['internal']) - for i in range(hist_size-1): - chunks.append(make_single_exchange(i)) - - add_chunks_to_collector(chunks, chat_collector) - query = '\n'.join(shared.history['internal'][-1] + [user_input]) - try: - best_ids = chat_collector.get_ids_sorted(query, n_results=params['chunk_count'], n_initial=params['chunk_count_initial'], time_weight=params['time_weight']) - additional_context = '\n' - for id_ in best_ids: - if shared.history['internal'][id_][0] != '<|BEGIN-VISIBLE-CHAT|>': - additional_context += make_single_exchange(id_) - - logger.warning(f'Adding the following new context:\n{additional_context}') - state['context'] = state['context'].strip() + '\n' + additional_context - kwargs['history'] = { - 'internal': [shared.history['internal'][i] for i in range(hist_size) if i not in best_ids], - 'visible': '' - } - except RuntimeError: - logger.error("Couldn't query the database, moving on...") - - return chat.generate_chat_prompt(user_input, state, **kwargs) - - -def remove_special_tokens(string): - pattern = r'(<\|begin-user-input\|>|<\|end-user-input\|>|<\|injection-point\|>)' - return re.sub(pattern, '', string) - - -def input_modifier(string): - if shared.is_chat(): - return string - - # Find the user input - pattern = re.compile(r"<\|begin-user-input\|>(.*?)<\|end-user-input\|>", re.DOTALL) - match = re.search(pattern, string) - if match: - user_input = match.group(1).strip() - - # Get the most similar chunks - results = collector.get_sorted(user_input, n_results=params['chunk_count']) - - # Make the injection - string = string.replace('<|injection-point|>', '\n'.join(results)) - - return remove_special_tokens(string) - - -def ui(): - with gr.Accordion("Click for more information...", open=False): - gr.Markdown(textwrap.dedent(""" - - ## About - - This extension takes a dataset as input, breaks it into chunks, and adds the result to a local/offline Chroma database. - - The database is then queried during inference time to get the excerpts that are closest to your input. The idea is to create an arbitrarily large pseudo context. - - The core methodology was developed and contributed by kaiokendev, who is working on improvements to the method in this repository: https://github.com/kaiokendev/superbig - - ## Data input - - Start by entering some data in the interface below and then clicking on "Load data". - - Each time you load some new data, the old chunks are discarded. - - ## Chat mode - - #### Instruct - - On each turn, the chunks will be compared to your current input and the most relevant matches will be appended to the input in the following format: - - ``` - Consider the excerpts below as additional context: - ... - ``` - - The injection doesn't make it into the chat history. It is only used in the current generation. - - #### Regular chat - - The chunks from the external data sources are ignored, and the chroma database is built based on the chat history instead. The most relevant past exchanges relative to the present input are added to the context string. This way, the extension acts as a long term memory. - - ## Notebook/default modes - - Your question must be manually specified between `<|begin-user-input|>` and `<|end-user-input|>` tags, and the injection point must be specified with `<|injection-point|>`. - - The special tokens mentioned above (`<|begin-user-input|>`, `<|end-user-input|>`, and `<|injection-point|>`) are removed in the background before the text generation begins. - - Here is an example in Vicuna 1.1 format: - - ``` - A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. - - USER: - - <|begin-user-input|> - What datasets are mentioned in the text below? - <|end-user-input|> - - <|injection-point|> - - ASSISTANT: - ``` - - ⚠️ For best results, make sure to remove the spaces and new line characters after `ASSISTANT:`. - - *This extension is currently experimental and under development.* - - """)) - - with gr.Row(): - with gr.Column(min_width=600): - with gr.Tab("Text input"): - data_input = gr.Textbox(lines=20, label='Input data') - update_data = gr.Button('Load data') - - with gr.Tab("URL input"): - url_input = gr.Textbox(lines=10, label='Input URLs', info='Enter one or more URLs separated by newline characters.') - strong_cleanup = gr.Checkbox(value=params['strong_cleanup'], label='Strong cleanup', info='Only keeps html elements that look like long-form text.') - threads = gr.Number(value=params['threads'], label='Threads', info='The number of threads to use while downloading the URLs.', precision=0) - update_url = gr.Button('Load data') - - with gr.Tab("File input"): - file_input = gr.File(label='Input file', type='binary') - update_file = gr.Button('Load data') - - with gr.Tab("Generation settings"): - chunk_count = gr.Number(value=params['chunk_count'], label='Chunk count', info='The number of closest-matching chunks to include in the prompt.') - gr.Markdown('Time weighting (optional, used in to make recently added chunks more likely to appear)') - time_weight = gr.Slider(0, 1, value=params['time_weight'], label='Time weight', info='Defines the strength of the time weighting. 0 = no time weighting.') - chunk_count_initial = gr.Number(value=params['chunk_count_initial'], label='Initial chunk count', info='The number of closest-matching chunks retrieved for time weight reordering in chat mode. This should be >= chunk count. -1 = All chunks are retrieved. Only used if time_weight > 0.') - - update_settings = gr.Button('Apply changes') - - chunk_len = gr.Number(value=params['chunk_length'], label='Chunk length', info='In characters, not tokens. This value is used when you click on "Load data".') - chunk_sep = gr.Textbox(value=params['chunk_separator'], label='Chunk separator', info='Used to manually split chunks. Manually split chunks longer than chunk length are split again. This value is used when you click on "Load data".') - with gr.Column(): - last_updated = gr.Markdown() - - update_data.click(feed_data_into_collector, [data_input, chunk_len, chunk_sep], last_updated, show_progress=False) - update_url.click(feed_url_into_collector, [url_input, chunk_len, chunk_sep, strong_cleanup, threads], last_updated, show_progress=False) - update_file.click(feed_file_into_collector, [file_input, chunk_len, chunk_sep], last_updated, show_progress=False) - update_settings.click(apply_settings, [chunk_count, chunk_count_initial, time_weight], last_updated, show_progress=False) diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/weights/download_weights.sh b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/weights/download_weights.sh deleted file mode 100644 index 206b7002aecaabdd0bbe1a721ff3a20860d0245d..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/weights/download_weights.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/bin/bash -# Download common models - -python -c " -from utils.google_utils import *; -attempt_download('weights/yolov5s.pt'); -attempt_download('weights/yolov5m.pt'); -attempt_download('weights/yolov5l.pt'); -attempt_download('weights/yolov5x.pt') -" diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/nrtr_pipeline.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/nrtr_pipeline.py deleted file mode 100644 index 71a19804309aa6692970b5eef642eddf87770559..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/nrtr_pipeline.py +++ /dev/null @@ -1,38 +0,0 @@ -img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=32, - min_width=32, - max_width=160, - keep_aspect_ratio=True, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'text', 'valid_ratio' - ]), -] - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=32, - min_width=32, - max_width=160, - keep_aspect_ratio=True), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'valid_ratio', - 'img_norm_cfg', 'ori_filename', 'img_shape' - ]) -] diff --git a/spaces/Evel/Evel_Space/utils.py b/spaces/Evel/Evel_Space/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Evel/Evel_Space/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/pyinterface.h b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/pyinterface.h deleted file mode 100644 index e53c88ae9dfcea7f9766828168d3ad35a404b699..0000000000000000000000000000000000000000 --- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/pyinterface.h +++ /dev/null @@ -1,38 +0,0 @@ -#include -#include -#include -#include - -extern "C" { - -struct PM_shape_t { - int width, height, channels; -}; - -enum PM_dtype_e { - PM_UINT8, - PM_INT8, - PM_UINT16, - PM_INT16, - PM_INT32, - PM_FLOAT32, - PM_FLOAT64, -}; - -struct PM_mat_t { - void *data_ptr; - PM_shape_t shape; - int dtype; -}; - -void PM_set_random_seed(unsigned int seed); -void PM_set_verbose(int value); - -void PM_free_pymat(PM_mat_t pymat); -PM_mat_t PM_inpaint(PM_mat_t image, PM_mat_t mask, int patch_size); -PM_mat_t PM_inpaint_regularity(PM_mat_t image, PM_mat_t mask, PM_mat_t ijmap, int patch_size, float guide_weight); -PM_mat_t PM_inpaint2(PM_mat_t image, PM_mat_t mask, PM_mat_t global_mask, int patch_size); -PM_mat_t PM_inpaint2_regularity(PM_mat_t image, PM_mat_t mask, PM_mat_t global_mask, PM_mat_t ijmap, int patch_size, float guide_weight); - -} /* extern "C" */ - diff --git a/spaces/GAIR/Factool/factool/math/__init__.py b/spaces/GAIR/Factool/factool/math/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GIGACHAhoon/BasicNNYoutubeSentimentTop5CommentPrediction/README.md b/spaces/GIGACHAhoon/BasicNNYoutubeSentimentTop5CommentPrediction/README.md deleted file mode 100644 index 933b41f0870b80f7c9ff10f8cf085880986e5e4c..0000000000000000000000000000000000000000 --- a/spaces/GIGACHAhoon/BasicNNYoutubeSentimentTop5CommentPrediction/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BasicNNYoutubeSentimentTop5CommentPrediction -emoji: 💻 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/setting/__init__.py b/spaces/GaenKoki/voicevox/voicevox_engine/setting/__init__.py deleted file mode 100644 index ff399f92b662072737fe036b7c9832997a76a553..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/setting/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .Setting import CorsPolicyMode, Setting -from .SettingLoader import USER_SETTING_PATH, SettingLoader - -__all__ = [ - "USER_SETTING_PATH", - "CorsPolicyMode", - "Setting", - "SettingLoader", -] diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_car.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_car.py deleted file mode 100644 index cf8ae73ce67ed67b69bdf716244ae6b2a7ade004..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_car.py +++ /dev/null @@ -1,94 +0,0 @@ -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class BuildCar(Task): - """Construct a simple car structure using blocks and cylinders.""" - - def __init__(self): - super().__init__() - self.max_steps = 15 - self.lang_template = "Construct a simple car structure using blocks and cylinders. " \ - "Firstly, create the base of the car by positioning two red blocks side by side. " \ - "Then, add the car body by stacking a blue block on top of the base. " \ - "For the wheels, place a black cylinder on each side of the base blocks." - self.task_completed_desc = "done building car." - self.additional_reset() - - def reset(self, env): - super().reset(env) - car_pose = ((0.5, 0.0, 0.0), (0,0,0,1)) # fixed pose - base_length = 0.04 - self.add_corner_anchor_for_pose(env, car_pose) - - # Add base blocks. Use box template so that we can change its size. - base_size = (0.02, 0.04, 0.02) - base_block_urdf = "box/box-template.urdf" - base_block_urdf = self.fill_template(base_block_urdf, {'DIM': base_size}) - anchor_base_poses = [(utils.apply(car_pose, (base_length / 2, base_length / 2, 0.001)), car_pose[1]), - (utils.apply(car_pose, (-base_length / 2, base_length / 2, 0.001)), car_pose[1])] - base_blocks = [] - - for idx in range(2): - base_block_pose = self.get_random_pose(env, base_size) - base_block_id = env.add_object(base_block_urdf, base_block_pose, color=utils.COLORS['red']) - base_blocks.append(base_block_id) - - # Add car body block. - body_size = (0.04, 0.02, 0.02) # x, y, z dimensions for the asset size - body_block_urdf = "box/box-template.urdf" - body_block_urdf = self.fill_template(body_block_urdf, {'DIM': body_size}) - body_block_pose = self.get_random_pose(env, body_size) - body_block_id = env.add_object(body_block_urdf, body_block_pose, color=utils.COLORS['blue']) - anchor_body_poses = [car_pose] - - wheel_length = 0.12 - anchor_wheel_poses = [(utils.apply(car_pose, ( wheel_length / 2, wheel_length / 2, 0.001)), car_pose[1]), - (utils.apply(car_pose, (-wheel_length / 2, wheel_length / 2, 0.001)), car_pose[1]), - (utils.apply(car_pose, ( wheel_length / 2, -wheel_length / 2, 0.001)), car_pose[1]), - (utils.apply(car_pose, (-wheel_length / 2, -wheel_length / 2, 0.001)), car_pose[1])] - - # Add wheels. - wheel_size = (0.02, 0.02, 0.02) # x, y, z dimensions for the asset size - wheel_urdf = 'cylinder/cylinder-template.urdf' - wheel_urdf = self.fill_template(wheel_urdf, {'DIM': wheel_size}) - - wheels = [] - for idx in range(4): - wheel_pose = self.get_random_pose(env, wheel_size) - wheel_id = env.add_object(wheel_urdf, wheel_pose, color=utils.COLORS['black']) - wheels.append(wheel_id) - - # Goal: Firstly, create the base of the car by positioning two red blocks side by side. - self.add_goal(objs=base_blocks, - matches=np.ones((2, 2)), - targ_poses=anchor_base_poses, - replace=False, - rotations=True, - metric='pose', - params=None, - step_max_reward=1./3, - language_goal="Firstly, create the base of the car by positioning two red blocks side by side.") - - # Then, add the car body by stacking a blue block on top of the base. - self.add_goal(objs=[body_block_id], - matches=np.ones((1, 1)), - targ_poses=anchor_body_poses, - replace=False, - rotations=True, - metric='pose', - params=None, - step_max_reward=1./3, - language_goal="Then, add the car body by stacking a blue block on top of the base.") - - # For the wheels, place a black cylinder on each side of the base blocks. - self.add_goal(objs=wheels, - matches=np.ones((4, 4)), - targ_poses=anchor_wheel_poses, - replace=False, - rotations=True, - metric='pose', - params=None, - step_max_reward=1./3, - language_goal="For the wheels, place a black cylinder on each side of the base blocks.") - diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/utils/logmmse.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/utils/logmmse.py deleted file mode 100644 index 58cc4502fa5ba0670678c3edaf5ba1587b8b58ea..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/utils/logmmse.py +++ /dev/null @@ -1,247 +0,0 @@ -# The MIT License (MIT) -# -# Copyright (c) 2015 braindead -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# -# -# This code was extracted from the logmmse package (https://pypi.org/project/logmmse/) and I -# simply modified the interface to meet my needs. - - -import numpy as np -import math -from scipy.special import expn -from collections import namedtuple - -NoiseProfile = namedtuple("NoiseProfile", "sampling_rate window_size len1 len2 win n_fft noise_mu2") - - -def profile_noise(noise, sampling_rate, window_size=0): - """ - Creates a profile of the noise in a given waveform. - - :param noise: a waveform containing noise ONLY, as a numpy array of floats or ints. - :param sampling_rate: the sampling rate of the audio - :param window_size: the size of the window the logmmse algorithm operates on. A default value - will be picked if left as 0. - :return: a NoiseProfile object - """ - noise, dtype = to_float(noise) - noise += np.finfo(np.float64).eps - - if window_size == 0: - window_size = int(math.floor(0.02 * sampling_rate)) - - if window_size % 2 == 1: - window_size = window_size + 1 - - perc = 50 - len1 = int(math.floor(window_size * perc / 100)) - len2 = int(window_size - len1) - - win = np.hanning(window_size) - win = win * len2 / np.sum(win) - n_fft = 2 * window_size - - noise_mean = np.zeros(n_fft) - n_frames = len(noise) // window_size - for j in range(0, window_size * n_frames, window_size): - noise_mean += np.absolute(np.fft.fft(win * noise[j:j + window_size], n_fft, axis=0)) - noise_mu2 = (noise_mean / n_frames) ** 2 - - return NoiseProfile(sampling_rate, window_size, len1, len2, win, n_fft, noise_mu2) - - -def denoise(wav, noise_profile: NoiseProfile, eta=0.15): - """ - Cleans the noise from a speech waveform given a noise profile. The waveform must have the - same sampling rate as the one used to create the noise profile. - - :param wav: a speech waveform as a numpy array of floats or ints. - :param noise_profile: a NoiseProfile object that was created from a similar (or a segment of - the same) waveform. - :param eta: voice threshold for noise update. While the voice activation detection value is - below this threshold, the noise profile will be continuously updated throughout the audio. - Set to 0 to disable updating the noise profile. - :return: the clean wav as a numpy array of floats or ints of the same length. - """ - wav, dtype = to_float(wav) - wav += np.finfo(np.float64).eps - p = noise_profile - - nframes = int(math.floor(len(wav) / p.len2) - math.floor(p.window_size / p.len2)) - x_final = np.zeros(nframes * p.len2) - - aa = 0.98 - mu = 0.98 - ksi_min = 10 ** (-25 / 10) - - x_old = np.zeros(p.len1) - xk_prev = np.zeros(p.len1) - noise_mu2 = p.noise_mu2 - for k in range(0, nframes * p.len2, p.len2): - insign = p.win * wav[k:k + p.window_size] - - spec = np.fft.fft(insign, p.n_fft, axis=0) - sig = np.absolute(spec) - sig2 = sig ** 2 - - gammak = np.minimum(sig2 / noise_mu2, 40) - - if xk_prev.all() == 0: - ksi = aa + (1 - aa) * np.maximum(gammak - 1, 0) - else: - ksi = aa * xk_prev / noise_mu2 + (1 - aa) * np.maximum(gammak - 1, 0) - ksi = np.maximum(ksi_min, ksi) - - log_sigma_k = gammak * ksi/(1 + ksi) - np.log(1 + ksi) - vad_decision = np.sum(log_sigma_k) / p.window_size - if vad_decision < eta: - noise_mu2 = mu * noise_mu2 + (1 - mu) * sig2 - - a = ksi / (1 + ksi) - vk = a * gammak - ei_vk = 0.5 * expn(1, np.maximum(vk, 1e-8)) - hw = a * np.exp(ei_vk) - sig = sig * hw - xk_prev = sig ** 2 - xi_w = np.fft.ifft(hw * spec, p.n_fft, axis=0) - xi_w = np.real(xi_w) - - x_final[k:k + p.len2] = x_old + xi_w[0:p.len1] - x_old = xi_w[p.len1:p.window_size] - - output = from_float(x_final, dtype) - output = np.pad(output, (0, len(wav) - len(output)), mode="constant") - return output - - -## Alternative VAD algorithm to webrctvad. It has the advantage of not requiring to install that -## darn package and it also works for any sampling rate. Maybe I'll eventually use it instead of -## webrctvad -# def vad(wav, sampling_rate, eta=0.15, window_size=0): -# """ -# TODO: fix doc -# Creates a profile of the noise in a given waveform. -# -# :param wav: a waveform containing noise ONLY, as a numpy array of floats or ints. -# :param sampling_rate: the sampling rate of the audio -# :param window_size: the size of the window the logmmse algorithm operates on. A default value -# will be picked if left as 0. -# :param eta: voice threshold for noise update. While the voice activation detection value is -# below this threshold, the noise profile will be continuously updated throughout the audio. -# Set to 0 to disable updating the noise profile. -# """ -# wav, dtype = to_float(wav) -# wav += np.finfo(np.float64).eps -# -# if window_size == 0: -# window_size = int(math.floor(0.02 * sampling_rate)) -# -# if window_size % 2 == 1: -# window_size = window_size + 1 -# -# perc = 50 -# len1 = int(math.floor(window_size * perc / 100)) -# len2 = int(window_size - len1) -# -# win = np.hanning(window_size) -# win = win * len2 / np.sum(win) -# n_fft = 2 * window_size -# -# wav_mean = np.zeros(n_fft) -# n_frames = len(wav) // window_size -# for j in range(0, window_size * n_frames, window_size): -# wav_mean += np.absolute(np.fft.fft(win * wav[j:j + window_size], n_fft, axis=0)) -# noise_mu2 = (wav_mean / n_frames) ** 2 -# -# wav, dtype = to_float(wav) -# wav += np.finfo(np.float64).eps -# -# nframes = int(math.floor(len(wav) / len2) - math.floor(window_size / len2)) -# vad = np.zeros(nframes * len2, dtype=np.bool) -# -# aa = 0.98 -# mu = 0.98 -# ksi_min = 10 ** (-25 / 10) -# -# xk_prev = np.zeros(len1) -# noise_mu2 = noise_mu2 -# for k in range(0, nframes * len2, len2): -# insign = win * wav[k:k + window_size] -# -# spec = np.fft.fft(insign, n_fft, axis=0) -# sig = np.absolute(spec) -# sig2 = sig ** 2 -# -# gammak = np.minimum(sig2 / noise_mu2, 40) -# -# if xk_prev.all() == 0: -# ksi = aa + (1 - aa) * np.maximum(gammak - 1, 0) -# else: -# ksi = aa * xk_prev / noise_mu2 + (1 - aa) * np.maximum(gammak - 1, 0) -# ksi = np.maximum(ksi_min, ksi) -# -# log_sigma_k = gammak * ksi / (1 + ksi) - np.log(1 + ksi) -# vad_decision = np.sum(log_sigma_k) / window_size -# if vad_decision < eta: -# noise_mu2 = mu * noise_mu2 + (1 - mu) * sig2 -# print(vad_decision) -# -# a = ksi / (1 + ksi) -# vk = a * gammak -# ei_vk = 0.5 * expn(1, np.maximum(vk, 1e-8)) -# hw = a * np.exp(ei_vk) -# sig = sig * hw -# xk_prev = sig ** 2 -# -# vad[k:k + len2] = vad_decision >= eta -# -# vad = np.pad(vad, (0, len(wav) - len(vad)), mode="constant") -# return vad - - -def to_float(_input): - if _input.dtype == np.float64: - return _input, _input.dtype - elif _input.dtype == np.float32: - return _input.astype(np.float64), _input.dtype - elif _input.dtype == np.uint8: - return (_input - 128) / 128., _input.dtype - elif _input.dtype == np.int16: - return _input / 32768., _input.dtype - elif _input.dtype == np.int32: - return _input / 2147483648., _input.dtype - raise ValueError('Unsupported wave file format') - - -def from_float(_input, dtype): - if dtype == np.float64: - return _input, np.float64 - elif dtype == np.float32: - return _input.astype(np.float32) - elif dtype == np.uint8: - return ((_input * 128) + 128).astype(np.uint8) - elif dtype == np.int16: - return (_input * 32768).astype(np.int16) - elif dtype == np.int32: - print(_input) - return (_input * 2147483648).astype(np.int32) - raise ValueError('Unsupported wave file format') diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py deleted file mode 100644 index b0164c75a976fa4dfd729147f9656d4e01c3529c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py' -model = dict(roi_head=dict(bbox_head=dict(num_classes=1))) -classes = ('person', ) -data = dict( - train=dict(classes=classes), - val=dict(classes=classes), - test=dict(classes=classes)) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth' # noqa diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py deleted file mode 100644 index aee78089b9e32d3c0bcd6a29f51c22d1af96d2ce..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = '../htc/htc_r50_fpn_20e_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w32', - backbone=dict( - _delete_=True, - type='HRNet', - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(32, 64)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(32, 64, 128)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(32, 64, 128, 256)))), - neck=dict( - _delete_=True, - type='HRFPN', - in_channels=[32, 64, 128, 256], - out_channels=256)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/yolo/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/yolo/README.md deleted file mode 100644 index 1f539c6f1df4f4fed9e2c5dfc3058dc5fef72d85..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/yolo/README.md +++ /dev/null @@ -1,28 +0,0 @@ -# YOLOv3 - -## Introduction - -[ALGORITHM] - -```latex -@misc{redmon2018yolov3, - title={YOLOv3: An Incremental Improvement}, - author={Joseph Redmon and Ali Farhadi}, - year={2018}, - eprint={1804.02767}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` - -## Results and Models - -| Backbone | Scale | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| DarkNet-53 | 320 | 273e | 2.7 | 63.9 | 27.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolo/yolov3_d53_320_273e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_320_273e_coco/yolov3_d53_320_273e_coco-421362b6.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_320_273e_coco/yolov3_d53_320_273e_coco-20200819_172101.log.json) | -| DarkNet-53 | 416 | 273e | 3.8 | 61.2 | 30.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolo/yolov3_d53_mstrain-416_273e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_mstrain-416_273e_coco/yolov3_d53_mstrain-416_273e_coco-2b60fcd9.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_mstrain-416_273e_coco/yolov3_d53_mstrain-416_273e_coco-20200819_173424.log.json) | -| DarkNet-53 | 608 | 273e | 7.1 | 48.1 | 33.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_mstrain-608_273e_coco/yolov3_d53_mstrain-608_273e_coco-139f5633.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_mstrain-608_273e_coco/yolov3_d53_mstrain-608_273e_coco-20200819_170820.log.json) | - -## Credit - -This implementation originates from the project of Haoyu Wu(@wuhy08) at Western Digital. diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/mask/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/mask/__init__.py deleted file mode 100644 index ab1e88bc686d5c2fe72b3114cb2b3e372e73a0f8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/mask/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .mask_target import mask_target -from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks -from .utils import encode_mask_results, split_combined_polys - -__all__ = [ - 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks', - 'PolygonMasks', 'encode_mask_results' -] diff --git a/spaces/Gradio-Blocks/uniformer_video_demo/README.md b/spaces/Gradio-Blocks/uniformer_video_demo/README.md deleted file mode 100644 index 83f8ad2b61fb7d558d63376f8c2ad99048dfd320..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_video_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uniformer_video_demo -emoji: 📹 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Hallucinate/demo/AdaBins-main/README.md b/spaces/Hallucinate/demo/AdaBins-main/README.md deleted file mode 100644 index 1ec72f816410cf719639d1fddac4e14ca92f2d2f..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/AdaBins-main/README.md +++ /dev/null @@ -1,68 +0,0 @@ -# AdaBins -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adabins-depth-estimation-using-adaptive-bins/monocular-depth-estimation-on-kitti-eigen)](https://paperswithcode.com/sota/monocular-depth-estimation-on-kitti-eigen?p=adabins-depth-estimation-using-adaptive-bins) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adabins-depth-estimation-using-adaptive-bins/monocular-depth-estimation-on-nyu-depth-v2)](https://paperswithcode.com/sota/monocular-depth-estimation-on-nyu-depth-v2?p=adabins-depth-estimation-using-adaptive-bins) - -Official implementation of [Adabins: Depth Estimation using adaptive bins](https://arxiv.org/abs/2011.14141) -## Download links -* You can download the pretrained models "AdaBins_nyu.pt" and "AdaBins_kitti.pt" from [here](https://drive.google.com/drive/folders/1nYyaQXOBjNdUJDsmJpcRpu6oE55aQoLA?usp=sharing) -* You can download the predicted depths in 16-bit format for NYU-Depth-v2 official test set and KITTI Eigen split test set [here](https://drive.google.com/drive/folders/1b3nfm8lqrvUjtYGmsqA5gptNQ8vPlzzS?usp=sharing) - -## Colab demo - -

              - - Open In Colab - -

              - -## Inference -Move the downloaded weights to a directory of your choice (we will use "./pretrained/" here). You can then use the pretrained models like so: - -```python -from models import UnetAdaptiveBins -import model_io -from PIL import Image - -MIN_DEPTH = 1e-3 -MAX_DEPTH_NYU = 10 -MAX_DEPTH_KITTI = 80 - -N_BINS = 256 - -# NYU -model = UnetAdaptiveBins.build(n_bins=N_BINS, min_val=MIN_DEPTH, max_val=MAX_DEPTH_NYU) -pretrained_path = "./pretrained/AdaBins_nyu.pt" -model, _, _ = model_io.load_checkpoint(pretrained_path, model) - -bin_edges, predicted_depth = model(example_rgb_batch) - -# KITTI -model = UnetAdaptiveBins.build(n_bins=N_BINS, min_val=MIN_DEPTH, max_val=MAX_DEPTH_KITTI) -pretrained_path = "./pretrained/AdaBins_kitti.pt" -model, _, _ = model_io.load_checkpoint(pretrained_path, model) - -bin_edges, predicted_depth = model(example_rgb_batch) -``` -Note that the model returns bin-edges (instead of bin-centers). - -**Recommended way:** `InferenceHelper` class in `infer.py` provides an easy interface for inference and handles various types of inputs (with any prepocessing required). It uses Test-Time-Augmentation (H-Flips) and also calculates bin-centers for you: -```python -from infer import InferenceHelper - -infer_helper = InferenceHelper(dataset='nyu') - -# predict depth of a batched rgb tensor -example_rgb_batch = ... -bin_centers, predicted_depth = infer_helper.predict(example_rgb_batch) - -# predict depth of a single pillow image -img = Image.open("test_imgs/classroom__rgb_00283.jpg") # any rgb pillow image -bin_centers, predicted_depth = infer_helper.predict_pil(img) - -# predict depths of images stored in a directory and store the predictions in 16-bit format in a given separate dir -infer_helper.predict_dir("/path/to/input/dir/containing_only_images/", "path/to/output/dir/") - -``` -## TODO: -* Add instructions for Evaluation and Training. -* Add UI demo -* Remove unnecessary dependencies diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/cider/cider_scorer.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/cider/cider_scorer.py deleted file mode 100644 index d7f9505916f2210617cc529bf3c05acfa06d5a62..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/cider/cider_scorer.py +++ /dev/null @@ -1,207 +0,0 @@ -#!/usr/bin/env python -# Tsung-Yi Lin -# Ramakrishna Vedantam -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import copy -import six -from six.moves import cPickle -from collections import defaultdict -import numpy as np -import math -import os - -def precook(s, n=4, out=False): - """ - Takes a string as input and returns an object that can be given to - either cook_refs or cook_test. This is optional: cook_refs and cook_test - can take string arguments as well. - :param s: string : sentence to be converted into ngrams - :param n: int : number of ngrams for which representation is calculated - :return: term frequency vector for occuring ngrams - """ - words = s.split() - counts = defaultdict(int) - for k in range(1,n+1): - for i in range(len(words)-k+1): - ngram = tuple(words[i:i+k]) - counts[ngram] += 1 - return counts - -def cook_refs(refs, n=4): ## lhuang: oracle will call with "average" - '''Takes a list of reference sentences for a single segment - and returns an object that encapsulates everything that BLEU - needs to know about them. - :param refs: list of string : reference sentences for some image - :param n: int : number of ngrams for which (ngram) representation is calculated - :return: result (list of dict) - ''' - return [precook(ref, n) for ref in refs] - -def cook_test(test, n=4): - '''Takes a test sentence and returns an object that - encapsulates everything that BLEU needs to know about it. - :param test: list of string : hypothesis sentence for some image - :param n: int : number of ngrams for which (ngram) representation is calculated - :return: result (dict) - ''' - return precook(test, n, True) - -class CiderScorer(object): - """CIDEr scorer. - """ - - def copy(self): - ''' copy the refs.''' - new = CiderScorer(n=self.n) - new.ctest = copy.copy(self.ctest) - new.crefs = copy.copy(self.crefs) - return new - - def __init__(self, df_mode="corpus", test=None, refs=None, n=4, sigma=6.0): - ''' singular instance ''' - self.n = n - self.sigma = sigma - self.crefs = [] - self.ctest = [] - self.df_mode = df_mode - self.ref_len = None - if self.df_mode != "corpus": - pkl_file = cPickle.load(open(os.path.join('data', df_mode + '.p'),'rb'), **(dict(encoding='latin1') if six.PY3 else {})) - self.ref_len = np.log(float(pkl_file['ref_len'])) - self.document_frequency = pkl_file['document_frequency'] - self.cook_append(test, refs) - - def clear(self): - self.crefs = [] - self.ctest = [] - - def cook_append(self, test, refs): - '''called by constructor and __iadd__ to avoid creating new instances.''' - - if refs is not None: - self.crefs.append(cook_refs(refs)) - if test is not None: - self.ctest.append(cook_test(test)) ## N.B.: -1 - else: - self.ctest.append(None) # lens of crefs and ctest have to match - - def size(self): - assert len(self.crefs) == len(self.ctest), "refs/test mismatch! %d<>%d" % (len(self.crefs), len(self.ctest)) - return len(self.crefs) - - def __iadd__(self, other): - '''add an instance (e.g., from another sentence).''' - - if type(other) is tuple: - ## avoid creating new CiderScorer instances - self.cook_append(other[0], other[1]) - else: - self.ctest.extend(other.ctest) - self.crefs.extend(other.crefs) - - return self - def compute_doc_freq(self): - ''' - Compute term frequency for reference data. - This will be used to compute idf (inverse document frequency later) - The term frequency is stored in the object - :return: None - ''' - for refs in self.crefs: - # refs, k ref captions of one image - for ngram in set([ngram for ref in refs for (ngram,count) in ref.items()]): - self.document_frequency[ngram] += 1 - # maxcounts[ngram] = max(maxcounts.get(ngram,0), count) - - def compute_cider(self): - def counts2vec(cnts): - """ - Function maps counts of ngram to vector of tfidf weights. - The function returns vec, an array of dictionary that store mapping of n-gram and tf-idf weights. - The n-th entry of array denotes length of n-grams. - :param cnts: - :return: vec (array of dict), norm (array of float), length (int) - """ - vec = [defaultdict(float) for _ in range(self.n)] - length = 0 - norm = [0.0 for _ in range(self.n)] - for (ngram,term_freq) in cnts.items(): - # give word count 1 if it doesn't appear in reference corpus - df = np.log(max(1.0, self.document_frequency[ngram])) - # ngram index - n = len(ngram)-1 - # tf (term_freq) * idf (precomputed idf) for n-grams - vec[n][ngram] = float(term_freq)*(self.ref_len - df) - # compute norm for the vector. the norm will be used for - # computing similarity - norm[n] += pow(vec[n][ngram], 2) - - if n == 1: - length += term_freq - norm = [np.sqrt(n) for n in norm] - return vec, norm, length - - def sim(vec_hyp, vec_ref, norm_hyp, norm_ref, length_hyp, length_ref): - ''' - Compute the cosine similarity of two vectors. - :param vec_hyp: array of dictionary for vector corresponding to hypothesis - :param vec_ref: array of dictionary for vector corresponding to reference - :param norm_hyp: array of float for vector corresponding to hypothesis - :param norm_ref: array of float for vector corresponding to reference - :param length_hyp: int containing length of hypothesis - :param length_ref: int containing length of reference - :return: array of score for each n-grams cosine similarity - ''' - delta = float(length_hyp - length_ref) - # measure consine similarity - val = np.array([0.0 for _ in range(self.n)]) - for n in range(self.n): - # ngram - for (ngram,count) in vec_hyp[n].items(): - val[n] += vec_hyp[n][ngram] * vec_ref[n][ngram] - - if (norm_hyp[n] != 0) and (norm_ref[n] != 0): - val[n] /= (norm_hyp[n]*norm_ref[n]) - - assert(not math.isnan(val[n])) - return val - - # compute log reference length - if self.df_mode == "corpus": - self.ref_len = np.log(float(len(self.crefs))) - - scores = [] - for test, refs in zip(self.ctest, self.crefs): - # compute vector for test captions - vec, norm, length = counts2vec(test) - # compute vector for ref captions - score = np.array([0.0 for _ in range(self.n)]) - for ref in refs: - vec_ref, norm_ref, length_ref = counts2vec(ref) - score += sim(vec, vec_ref, norm, norm_ref, length, length_ref) - # change by vrama91 - mean of ngram scores, instead of sum - score_avg = np.mean(score) - # divide by number of references - score_avg /= len(refs) - # multiply score by 10 - score_avg *= 10.0 - # append score of an image to the score list - scores.append(score_avg) - return scores - - def compute_score(self, option=None, verbose=0): - # compute idf - if self.df_mode == "corpus": - self.document_frequency = defaultdict(float) - self.compute_doc_freq() - # assert to check document frequency - assert(len(self.ctest) >= max(self.document_frequency.values())) - # import json for now and write the corresponding files - # compute cider score - score = self.compute_cider() - # debug - # print score - return np.mean(np.array(score)), np.array(score) diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/glow/prepare_iitm_data_glow.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/glow/prepare_iitm_data_glow.py deleted file mode 100644 index 9e1e5cb8cd85c88892371851917ec721c2c4b08e..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/glow/prepare_iitm_data_glow.py +++ /dev/null @@ -1,134 +0,0 @@ -import os -from glob import glob -import re -import string -import argparse - -import random -random.seed(42) - -def replace_extra_chars(line): - line = line.replace("(", "").replace( - ")", "" - ) # .replace('\u200d', ' ').replace('\ufeff', ' ').replace('\u200c', ' ').replace('\u200e', ' ') - # line = line.replace('“', ' ').replace('”', ' ').replace(':', ' ') - - return line.strip() - - -def write_txt(content, filename): - with open(filename, "w+", encoding="utf-8") as f: - f.write(content) - - -def save_train_test_valid_split(annotations_txt, num_samples_valid, num_samples_test): - with open(annotations_txt, encoding="utf-8") as f: - all_lines = [line.strip() for line in f.readlines()] - test_val_indices = random.sample( - range(len(all_lines)), num_samples_valid + num_samples_test - ) - valid_ix = test_val_indices[:num_samples_valid] - test_ix = test_val_indices[num_samples_valid:] - train = [line for i, line in enumerate(all_lines) if i not in test_val_indices] - valid = [line for i, line in enumerate(all_lines) if i in valid_ix] - test = [line for i, line in enumerate(all_lines) if i in test_ix] - - print(f"Num samples in train: {len(train)}") - print(f"Num samples in valid: {len(valid)}") - print(f"Num samples in test: {len(test)}") - - out_dir_path = "/".join(annotations_txt.split("/")[:-1]) - with open(os.path.join(out_dir_path, "train.txt"), "w+", encoding="utf-8") as f: - for line in train: - print(line, file=f) - with open(os.path.join(out_dir_path, "valid.txt"), "w+", encoding="utf-8") as f: - for line in valid: - print(line, file=f) - with open(os.path.join(out_dir_path, "test.txt"), "w+", encoding="utf-8") as f: - for line in test: - print(line, file=f) - print(f"train, test and valid txts saved in {out_dir_path}") - - -def save_txts_from_txt_done_data( - text_path, - wav_path_for_annotations_txt, - out_path_for_txts, - num_samples_valid, - num_samples_test, -): - outfile = os.path.join(out_path_for_txts, "annotations.txt") - with open(text_path) as file: - file_lines = file.readlines() - - # print(file_lines[0]) - - file_lines = [replace_extra_chars(line) for line in file_lines] - # print(file_lines[0]) - - fnames, ftexts = [], [] - for line in file_lines: - elems = line.split('"') - fnames.append(elems[0].strip()) - ftexts.append(elems[1].strip()) - - all_chars = list(set("".join(ftexts))) - punct_with_space = [i for i in all_chars if i in list(string.punctuation)] + [" "] - chars = [i for i in all_chars if i not in punct_with_space if i.strip()] - chars = "".join(chars) - punct_with_space = "".join(punct_with_space) - - with open('../../config/glow/base_blank.json', 'r') as jfile: - json_config = json.load(jfile) - - json_config["data"]["chars"] = chars - json_config["data"]["punc"] = punct_with_space - json_config["data"]["training_files"]=out_path_for_txts + '/train.txt' - json_config["data"]["validation_files"] = out_path_for_txts + '/valid.txt' - new_config_name = out_path_for_txts.split('/')[-1] - with open(f'../../config/glow/{new_config_name}.json','w+') as jfile: - json.dump(json_config, jfile) - - print(f"Characters: {chars}") - print(f"Punctuation: {punct_with_space}") - print(f"Config file is stored at ../../config/glow/{new_config_name}.json") - - outfile_f = open(outfile, "w+", encoding="utf-8") - for f, t in zip(fnames, ftexts): - print( - os.path.join(wav_path_for_annotations_txt, f) + ".wav", - t, - sep="|", - file=outfile_f, - ) - outfile_f.close() - write_txt(punct_with_space, os.path.join(out_path_for_txts, "punc.txt")) - write_txt(chars, os.path.join(out_path_for_txts, "chars.txt")) - - save_train_test_valid_split( - annotations_txt=outfile, - num_samples_valid=num_samples_valid, - num_samples_test=num_samples_test, - ) - - - - -if __name__ == "__main__": - - - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--text-path", type=str, required=True) - parser.add_argument("-o", "--output-path", type=str, required=True) - parser.add_argument("-w", "--wav-path", type=str, required=True) - parser.add_argument("-v", "--valid-samples", type=int, default = 100) - parser.add_argument("-t", "--test-samples", type=int, default = 10) - args = parser.parse_args() - - save_txts_from_txt_done_data( - args.text_path, - args.wav_path, - args.output_path, - args.valid_samples, - args.test_samples, - ) diff --git a/spaces/Harveenchadha/en_to_indic_translation/app.py b/spaces/Harveenchadha/en_to_indic_translation/app.py deleted file mode 100644 index 5f7cf6933c43c2d0f4245de8b22539995e0c52d1..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import os -#import gradio as gr - -os.system('wget -q https://storage.googleapis.com/vakyaansh-open-models/translation_models/en-indic.zip') -os.system('unzip /home/user/app/en-indic.zip') -os.system('pip uninstall -y numpy') -os.system('pip install numpy') -#os.system('pip uninstall -y numba') -#os.system('pip install numba==0.53') - -from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils -import gradio as grd -from inference.engine import Model -indic2en_model = Model(expdir='en-indic') - -INDIC = {"Assamese": "as", "Bengali": "bn", "Gujarati": "gu", "Hindi": "hi","Kannada": "kn","Malayalam": "ml", "Marathi": "mr", "Odia": "or","Punjabi": "pa","Tamil": "ta", "Telugu" : "te"} - - -def translate(text, lang): - return indic2en_model.translate_paragraph(text, 'en', INDIC[lang]) - - - -languages = list(INDIC.keys()) - -#print(translate('helo how are you')) -ddwn = grd.inputs.Dropdown(languages, type="value", default="Hindi", label="Select Target Language") -txt = grd.inputs.Textbox( lines=5, placeholder="Enter Text to translate", default="", label="Enter Text in English") -txt_ouptut = grd.outputs.Textbox(type="auto", label="Translated text in Target Language") - -example=[['I want to translate this sentence in Hindi','Hindi'], - ['I am feeling very good today.', 'Bengali']] - -supp = ','.join(languages) -iface = grd.Interface(fn=translate, inputs=[txt,ddwn] , outputs=txt_ouptut, title='Translation for 11 Indic Languages', description = 'This is a demo based on IndicTrans. Languages Supported: '+supp, article = 'Original repo [link](https://github.com/AI4Bharat/indicTrans) by AI4Bharat. Note: This space can only perform translation from English to Indic languages. Support for other combinations will be provided soon.', examples=example) -iface.launch(enable_queue=True) diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.50b5507a.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.50b5507a.js deleted file mode 100644 index be267cb1caca9a9db2ebce29af342de8fa84e7de..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.50b5507a.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as pe,i as ke,s as ve,ae as te,P as we,c as M,m as N,l as T,j as y,k as B,o as S,Z as ye,R as je,T as ze,a as D,B as Ae,f as C,U as Be,V as Ce,D as E,E as H,n as G,aa as Ge,I as F,e as w,b as h,M as L,Y as J,d as A,g as j,t as ne,h as ie,C as oe,A as De,ai as Ie,x as U}from"./index.396f4a72.js";import{B as Re}from"./BlockLabel.37da86a3.js";import{M as Le}from"./ModifyUpload.2cfe71e4.js";import{n as K}from"./utils.27234e1d.js";import{I as Y}from"./Image.4a41f1aa.js";function O(t,e,l){const i=t.slice();return i[30]=e[l][0],i[31]=e[l][1],i[33]=l,i}function Q(t,e,l){const i=t.slice();return i[30]=e[l],i[34]=e,i[33]=l,i}function W(t){let e,l;return e=new Re({props:{show_label:t[1],Icon:Y,label:t[2]||"Gallery",disable:typeof t[6].container=="boolean"&&!t[6].container}}),{c(){M(e.$$.fragment)},m(i,o){N(e,i,o),l=!0},p(i,o){const n={};o[0]&2&&(n.show_label=i[1]),o[0]&4&&(n.label=i[2]||"Gallery"),o[0]&64&&(n.disable=typeof i[6].container=="boolean"&&!i[6].container),e.$set(n)},i(i){l||(y(e.$$.fragment,i),l=!0)},o(i){B(e.$$.fragment,i),l=!1},d(i){S(e,i)}}}function Me(t){let e,l,i,o,n,r,a=t[7]!==null&&X(t);const p=[Te,Se],m=[];function g(f,c){return f[10].length===0?0:1}return i=g(t),o=m[i]=p[i](t),{c(){a&&a.c(),e=D(),l=w("div"),o.c(),h(l,"class","overflow-y-auto h-full p-2"),te(()=>t[27].call(l)),A(l,"min-h-[350px]",t[6].height!=="auto"),A(l,"max-h-[55vh]",t[6].height!=="auto"),A(l,"xl:min-h-[450px]",t[6].height!=="auto")},m(f,c){a&&a.m(f,c),C(f,e,c),C(f,l,c),m[i].m(l,null),n=Ie(l,t[27].bind(l)),r=!0},p(f,c){f[7]!==null?a?(a.p(f,c),c[0]&128&&y(a,1)):(a=X(f),a.c(),y(a,1),a.m(e.parentNode,e)):a&&(E(),B(a,1,1,()=>{a=null}),H());let z=i;i=g(f),i===z?m[i].p(f,c):(E(),B(m[z],1,1,()=>{m[z]=null}),H(),o=m[i],o?o.p(f,c):(o=m[i]=p[i](f),o.c()),y(o,1),o.m(l,null)),c[0]&64&&A(l,"min-h-[350px]",f[6].height!=="auto"),c[0]&64&&A(l,"max-h-[55vh]",f[6].height!=="auto"),c[0]&64&&A(l,"xl:min-h-[450px]",f[6].height!=="auto")},i(f){r||(y(a),y(o),r=!0)},o(f){B(a),B(o),r=!1},d(f){a&&a.d(f),f&&G(e),f&&G(l),m[i].d(),n()}}}function Ne(t){let e,l,i,o;return i=new Y({}),{c(){e=w("div"),l=w("div"),M(i.$$.fragment),h(l,"class","h-5 dark:text-white opacity-50"),h(e,"class","h-full min-h-[15rem] flex justify-center items-center")},m(n,r){C(n,e,r),j(e,l),N(i,l,null),o=!0},p:U,i(n){o||(y(i.$$.fragment,n),o=!0)},o(n){B(i.$$.fragment,n),o=!1},d(n){n&&G(e),S(i)}}}function X(t){let e,l,i,o,n,r,a,p,m,g,f,c,z;l=new Le({}),l.$on("clear",t[21]);let s=t[10][t[7]][1]&&$(t),b=t[10],k=[];for(let u=0;ut[23](e,p),c=()=>t[23](null,p);function z(){return t[24](t[33])}return{c(){e=w("button"),l=w("img"),r=D(),h(l,"class","h-full w-full overflow-hidden object-contain"),L(l.src,i=t[30][0].data)||h(l,"src",i),h(l,"title",o=t[30][1]||null),h(l,"alt",n=t[30][1]||null),h(e,"class",a="gallery-item !flex-none !h-9 !w-9 transition-all duration-75 "+(t[7]===t[33]?"!ring-2 !ring-orange-500 hover:!ring-orange-500":"scale-90 transform")+" svelte-1g9btlg")},m(s,b){C(s,e,b),j(e,l),j(e,r),f(),m||(g=T(e,"click",z),m=!0)},p(s,b){t=s,b[0]&1024&&!L(l.src,i=t[30][0].data)&&h(l,"src",i),b[0]&1024&&o!==(o=t[30][1]||null)&&h(l,"title",o),b[0]&1024&&n!==(n=t[30][1]||null)&&h(l,"alt",n),b[0]&128&&a!==(a="gallery-item !flex-none !h-9 !w-9 transition-all duration-75 "+(t[7]===t[33]?"!ring-2 !ring-orange-500 hover:!ring-orange-500":"scale-90 transform")+" svelte-1g9btlg")&&h(e,"class",a),p!==t[33]&&(c(),p=t[33],f())},d(s){s&&G(e),c(),m=!1,g()}}}function Se(t){let e,l,i=t[10],o=[];for(let n=0;n{g=null}),H());let u=o;o=z(s),o===u?c[o].p(s,b):(E(),B(c[u],1,1,()=>{c[u]=null}),H(),n=c[o],n?n.p(s,b):(n=c[o]=f[o](s),n.c()),y(n,1),n.m(r.parentNode,r))},i(s){a||(y(e.$$.fragment,s),y(g),y(n),a=!0)},o(s){B(e.$$.fragment,s),B(g),B(n),a=!1},d(s){S(e,s),s&&G(l),g&&g.d(s),s&&G(i),c[o].d(s),s&&G(r)}}}function Ee(t){let e,l,i,o;return te(t[20]),e=new we({props:{visible:t[4],variant:"solid",color:"grey",padding:!1,elem_id:t[3],disable:typeof t[6].container=="boolean"&&!t[6].container,$$slots:{default:[qe]},$$scope:{ctx:t}}}),{c(){M(e.$$.fragment)},m(n,r){N(e,n,r),l=!0,i||(o=T(window,"resize",t[20]),i=!0)},p(n,r){const a={};r[0]&16&&(a.visible=n[4]),r[0]&8&&(a.elem_id=n[3]),r[0]&64&&(a.disable=typeof n[6].container=="boolean"&&!n[6].container),r[0]&64999|r[1]&16&&(a.$$scope={dirty:r,ctx:n}),e.$set(a)},i(n){l||(y(e.$$.fragment,n),l=!0)},o(n){B(e.$$.fragment,n),l=!1},d(n){S(e,n),i=!1,o()}}}function He(t,e,l){let i,o,n,r,a,{loading_status:p}=e,{show_label:m}=e,{label:g}=e,{root:f}=e,{root_url:c}=e,{elem_id:z=""}=e,{visible:s=!0}=e,{value:b=null}=e,{style:k={}}=e,u=null,d=null;function v(_){switch(_.code){case"Escape":_.preventDefault(),l(7,d=null);break;case"ArrowLeft":_.preventDefault(),l(7,d=o);break;case"ArrowRight":_.preventDefault(),l(7,d=n);break}}let I=[],R;async function ae(_){if(typeof _!="number")return;await Ge(),I[_].focus();const{left:P,width:he}=R.getBoundingClientRect(),{left:be,width:de}=I[_].getBoundingClientRect(),Z=be-P+de/2-he/2+R.scrollLeft;R.scrollTo({left:Z<0?0:Z,behavior:"smooth"})}let q=0,V=0;function se(){l(9,V=window.innerHeight)}const re=()=>l(7,d=null),fe=()=>l(7,d=n);function ue(_,P){F[_?"unshift":"push"](()=>{I[P]=_,l(11,I)})}const _e=_=>l(7,d=_);function ce(_){F[_?"unshift":"push"](()=>{R=_,l(12,R)})}const me=_=>l(7,d=r?_:d);function ge(){q=this.clientHeight,l(8,q)}return t.$$set=_=>{"loading_status"in _&&l(0,p=_.loading_status),"show_label"in _&&l(1,m=_.show_label),"label"in _&&l(2,g=_.label),"root"in _&&l(17,f=_.root),"root_url"in _&&l(18,c=_.root_url),"elem_id"in _&&l(3,z=_.elem_id),"visible"in _&&l(4,s=_.visible),"value"in _&&l(5,b=_.value),"style"in _&&l(6,k=_.style)},t.$$.update=()=>{t.$$.dirty[0]&393248&&l(10,i=b===null?null:b.map(_=>Array.isArray(_)?[K(_[0],c??f),_[1]]:[K(_,c??f),null])),t.$$.dirty[0]&524320&&u!==b&&(l(7,d=null),l(19,u=b)),t.$$.dirty[0]&1152&&(o=((d??0)+(i?.length??0)-1)%(i?.length??0)),t.$$.dirty[0]&1152&&l(15,n=((d??0)+1)%(i?.length??0)),t.$$.dirty[0]&128&&ae(d),t.$$.dirty[0]&768&&l(14,r=V>=q),t.$$.dirty[0]&64&&l(13,{classes:a}=ye(k,["grid"]),a)},[p,m,g,z,s,b,k,d,q,V,i,I,R,a,r,n,v,f,c,u,se,re,fe,ue,_e,ce,me,ge]}class Ue extends pe{constructor(e){super(),ke(this,e,He,Ee,ve,{loading_status:0,show_label:1,label:2,root:17,root_url:18,elem_id:3,visible:4,value:5,style:6},null,[-1,-1])}}var Ke=Ue;const Oe=["static"],Qe=t=>({type:"Array<{ name: string } | [{ name: string }, string]>",description:"list of objects with filename and optional caption"});export{Ke as Component,Qe as document,Oe as modes}; -//# sourceMappingURL=index.50b5507a.js.map diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/utils/gradio_utils.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/utils/gradio_utils.py deleted file mode 100644 index cbd7b5113b5a4dea0ca5465e8a19ad1d5c8f829a..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/utils/gradio_utils.py +++ /dev/null @@ -1,487 +0,0 @@ -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import logging - -import gradio as gr -import numpy as np -import pandas as pd -from matplotlib.figure import Figure -import seaborn as sns -import statistics -import streamlit as st -import utils -import utils.dataset_utils as ds_utils -#from st_aggrid import AgGrid, GridOptionsBuilder --> commenting out to fix local build error -from utils.dataset_utils import HF_DESC_FIELD, HF_FEATURE_FIELD, HF_LABEL_FIELD - -logs = utils.prepare_logging(__file__) -st.set_option('deprecation.showPyplotGlobalUse', False) - -# Note: Make sure to consider colorblind-friendly colors for your images! Ex: -# ["#332288", "#117733", "#882255", "#AA4499", "#CC6677", "#44AA99", "#DDCC77", -# "#88CCEE"] - -pd.options.display.float_format = "{:,.3f}".format # '{:20,.2f}'.format - -def subheader(): - gr.Markdown("""This demo showcases the - [dataset metrics as we develop them](https://huggingface.co/blog/data-measurements-tool). - Right now this has: - - dynamic loading of datasets in the lib - - fetching config and info without downloading the dataset - - propose the list of candidate text and label features to select. - """) - - -def get_label_names(dataset_name: str, config_name: str, ds_name_to_dict): - label_field, label_names = ( - ds_name_to_dict[dataset_name][config_name][HF_FEATURE_FIELD][ - HF_LABEL_FIELD][0] - if len( - ds_name_to_dict[dataset_name][config_name][HF_FEATURE_FIELD][ - HF_LABEL_FIELD] - ) > 0 - else ((), []) - ) - return label_field, label_names - - - -def update_dataset(dataset_name: str, ds_name_to_dict): - # choose a config to analyze - ds_configs = ds_name_to_dict[dataset_name] - # special handling for the largest-by-far dataset, C4 - if dataset_name == "c4": - config_names = ['en', 'en.noblocklist', 'realnewslike'] - else: - config_names = list(ds_configs.keys()) - - config_name = config_names[0] - ds_config = ds_configs[config_name] - - text_features = ds_config[HF_FEATURE_FIELD]["string"] - text_features = [('text',)] if dataset_name == "c4" else [tp for tp in text_features if tp[0] != "id"] - feature = str(text_features[0]) - text_features = [str(f) for f in text_features] - - avail_splits = list(ds_config["splits"].keys()) - split = avail_splits[0] - - return [(config_names, config_name), (text_features, feature), (avail_splits, split)] - - -def update_config(dataset_name: str, config_name: str, ds_name_to_dict): - ds_config = ds_name_to_dict[dataset_name][config_name] - - text_features = ds_config[HF_FEATURE_FIELD]["string"] - text_features = [('text',)] if dataset_name == "c4" else [tp for tp in text_features if tp[0] != "id"] - feature = str(text_features[0]) - text_features = [str(f) for f in text_features] - - avail_splits = list(ds_config["splits"].keys()) - split = avail_splits[0] - - return [(text_features, feature), (avail_splits, split)] - - -def sidebar_selection(ds_name_to_dict, column_id=""): - ds_names = list(ds_name_to_dict.keys()) - with gr.Accordion(f"Choose dataset and field {column_id}", open=True): - subheader() - # choose a dataset to analyze - ds_name = gr.Dropdown( - label=f"Choose dataset to explore{column_id}:", - choices=ds_names, - value="hate_speech18", - ) - # choose a config to analyze - ds_configs = ds_name_to_dict[ds_name.value] - # special handling for the largest-by-far dataset, C4 - if ds_name == "c4": - config_names = ['en', 'en.noblocklist', 'realnewslike'] - else: - config_names = list(ds_configs.keys()) - config_name = gr.Dropdown( - label=f"Choose configuration{column_id}:", - choices=config_names, - value=config_names[0], - ) - # choose a subset of num_examples - ds_config = ds_configs[config_name.value] - text_features = ds_config[HF_FEATURE_FIELD]["string"] - # TODO @yacine: Explain what this is doing and why eg tp[0] could = "id" - text = f"Which text feature from the {column_id} dataset would you like to analyze?" - choices = [('text',)] if ds_name == "c4" else [tp for tp in text_features if tp[0] != "id"] - text_field = gr.Dropdown( - label=text, - choices=[str(f) for f in choices], - value=str(choices[0]) - ) - # Choose a split and dataset size - avail_splits = list(ds_config["splits"].keys()) - # 12.Nov note: Removing "test" because those should not be examined - # without discussion of pros and cons, which we haven't done yet. - if "test" in avail_splits: - avail_splits.remove("test") - split = gr.Dropdown( - label=f"Which split from the{column_id} dataset would you like to analyze?", - choices=avail_splits, - value=avail_splits[0], - ) - label_field, label_names = get_label_names(ds_name.value, config_name.value, ds_name_to_dict) - calculate_btn = gr.Button(value="Calculate", variant="primary") - return { - "dset_name": ds_name, - "dset_config": config_name, - "split_name": split, - "text_field": text_field, - "label_field": label_field, - "label_names": label_names, - "calculate_btn": calculate_btn - } - - -def expander_header(dstats, ds_name_to_dict, column_id=""): - with st.expander(f"Dataset Description{column_id}"): - st.markdown( - ds_name_to_dict[dstats.dset_name][dstats.dset_config][HF_DESC_FIELD] - ) - st.dataframe(dstats.dset_peek) - - -def expander_general_stats(dstats, column_id=""): - with gr.Accordion(f"General Text Statistics{column_id}"): - st.caption( - "Use this widget to check whether the terms you see most " - "represented in the dataset make sense for the goals of the dataset." - ) - st.markdown("There are {0} total words".format(str(dstats.total_words))) - st.markdown( - "There are {0} words after removing closed " - "class words".format(str(dstats.total_open_words)) - ) - st.markdown( - "The most common " - "[open class words](https://dictionary.apa.org/open-class-words) " - "and their counts are: " - ) - st.dataframe(dstats.sorted_top_vocab_df) - st.markdown( - "There are {0} missing values in the dataset.".format( - str(dstats.text_nan_count) - ) - ) - if dstats.dups_frac > 0: - st.markdown( - "The dataset is {0}% duplicates. " - "For more information about the duplicates, " - "click the 'Duplicates' tab below.".format( - str(round(dstats.dups_frac * 100, 2))) - ) - else: - st.markdown("There are 0 duplicate items in the dataset. ") - - -def expander_label_distribution(dstats, column_id=""): - with st.expander(f"Label Distribution{column_id}", expanded=False): - st.caption( - "Use this widget to see how balanced the labels in your dataset are." - ) - if dstats.fig_labels: - st.plotly_chart(dstats.fig_labels, use_container_width=True) - else: - st.markdown("No labels were found in the dataset") - - -def expander_text_lengths(dstats, column_id=""): - _TEXT_LENGTH_CAPTION = ( - "Use this widget to identify outliers, particularly suspiciously long " - "outliers." - ) - with st.expander(f"Text Lengths{column_id}", expanded=False): - st.caption(_TEXT_LENGTH_CAPTION) - st.markdown( - "Below, you can see how the lengths of the text instances in your " - "dataset are distributed." - ) - st.markdown( - "Any unexpected peaks or valleys in the distribution may help to " - "identify instances you want to remove or augment." - ) - st.markdown( - "### Here is the count of different text lengths in " - "your dataset:" - ) - # When matplotlib first creates this, it's a Figure. - # Once it's saved, then read back in, - # it's an ndarray that must be displayed using st.image - # (I know, lame). - if isinstance(dstats.length_obj.fig_lengths, Figure): - st.pyplot(dstats.length_obj.fig_lengths, use_container_width=True) - else: - try: - st.image(dstats.length_obj.fig_lengths) - except Exception as e: - logs.exception("Hit exception for lengths figure:") - logs.exception(e) - st.markdown( - "The average length of text instances is **" - + str(round(dstats.length_obj.avg_length, 2)) - + " words**, with a standard deviation of **" - + str(round(dstats.length_obj.std_length, 2)) - + "**." - ) - if dstats.length_obj.lengths_df is not None: - start_id_show_lengths = st.selectbox( - "Show examples of length:", - np.sort(dstats.length_obj.lengths_df["length"].unique())[::-1].tolist(), - key=f"select_show_length_{column_id}", - ) - st.table( - dstats.length_obj.lengths_df[ - dstats.length_obj.lengths_df["length"] == start_id_show_lengths - ].set_index("length") - ) - - -def expander_text_duplicates(dstats, column_id=""): - with st.expander(f"Text Duplicates{column_id}", expanded=False): - st.caption( - "Use this widget to identify text strings that appear more than " - "once." - ) - st.markdown( - "A model's training and testing may be negatively affected by " - "unwarranted duplicates " - "([Lee et al., 2021](https://arxiv.org/abs/2107.06499))." - ) - st.markdown("------") - st.write( - "### Here is the list of all the duplicated items and their counts " - "in the dataset." - ) - if not dstats.duplicates_results: - st.write("There are no duplicates in this dataset! 🥳") - else: - st.write("The fraction of the data that is a duplicate is:") - st.write(str(round(dstats.dups_frac, 4))) - # TODO: Check if this is slow when the size is large -- - # Should we store as dataframes? - # Dataframes allow this to be interactive. - st.dataframe(ds_utils.counter_dict_to_df(dstats.dups_dict)) - - -def expander_text_perplexities(dstats, column_id=""): - with st.expander(f"Text Perplexities{column_id}", expanded=False): - st.caption( - "Use this widget to identify text perplexities from GPT-2." - ) - st.markdown( - """ - Outlier perplexities, especially very high values, could highlight - an issue with an example. Smaller variations should be interpreted - with more care, as they indicate how similar to the GPT-2 training - corpus the examples are rather than being reflective of general - linguistic properties. - For more information on GPT-2, - see its [model card](https://hf.co/gpt2). - """ - ) - st.markdown("------") - st.write( - "### Here is the list of the examples in the dataset, sorted by " - "GPT-2 perplexity:" - ) - if dstats.perplexities_df is None or dstats.perplexities_df.empty: - st.write( - "Perplexities have not been computed yet for this dataset, or " - "this dataset is too large for the UI (> 1,000,000 examples).") - else: - st.dataframe(dstats.perplexities_df.reset_index(drop=True)) - - -def expander_npmi_description(min_vocab): - _NPMI_CAPTION = ( - "Use this widget to identify problematic biases and stereotypes in " - "your data." - ) - _NPMI_CAPTION1 = """ - nPMI scores for a word help to identify potentially - problematic associations, ranked by how close the association is.""" - _NPMI_CAPTION2 = """ - nPMI bias scores for paired words help to identify how word - associations are skewed between the selected selected words - ([Aka et al., 2021](https://arxiv.org/abs/2103.03417)). - """ - - st.caption(_NPMI_CAPTION) - st.markdown(_NPMI_CAPTION1) - st.markdown(_NPMI_CAPTION2) - st.markdown(" ") - st.markdown( - "You can select from gender and sexual orientation " - "identity terms that appear in the dataset at least %s " - "times." % min_vocab - ) - st.markdown( - "The resulting ranked words are those that co-occur with both " - "identity terms. " - ) - st.markdown( - "The more *positive* the score, the more associated the word is with " - "the first identity term. " - "The more *negative* the score, the more associated the word is with " - "the second identity term." - ) - - -def expander_zipf(dstats, column_id=""): - z = dstats.z - zipf_fig = dstats.zipf_fig - with st.expander( - f"Vocabulary Distribution{column_id}: Zipf's Law Fit", expanded=False - ): - try: - _ZIPF_CAPTION = """This shows how close the observed language is to an ideal - natural language distribution following [Zipf's law](https://en.wikipedia.org/wiki/Zipf%27s_law), - calculated by minimizing the [Kolmogorov-Smirnov (KS) statistic](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test).""" - - powerlaw_eq = r"""p(x) \propto x^{- \alpha}""" - zipf_summary = ( - "The optimal alpha based on this dataset is: **" - + str(round(z.alpha, 2)) - + "**, with a KS distance of: **" - + str(round(z.ks_distance, 2)) - ) - zipf_summary += ( - "**. This was fit with a minimum rank value of: **" - + str(int(z.xmin)) - + "**, which is the optimal rank *beyond which* the scaling regime of the power law fits best." - ) - - alpha_warning = "Your alpha value is a bit on the high side, which means that the distribution over words in this dataset is a bit unnatural. This could be due to non-language items throughout the dataset." - xmin_warning = "The minimum rank for this fit is a bit on the high side, which means that the frequencies of your most common words aren't distributed as would be expected by Zipf's law." - fit_results_table = pd.DataFrame.from_dict( - { - r"Alpha:": [str("%.2f" % z.alpha)], - "KS distance:": [str("%.2f" % z.ks_distance)], - "Min rank:": [str("%s" % int(z.xmin))], - }, - columns=["Results"], - orient="index", - ) - fit_results_table.index.name = column_id - st.caption( - "Use this widget for the counts of different words in your dataset, measuring the difference between the observed count and the expected count under Zipf's law." - ) - st.markdown(_ZIPF_CAPTION) - st.write( - """ - A Zipfian distribution follows the power law: $p(x) \propto x^{-α}$ - with an ideal α value of 1.""" - ) - st.markdown( - "In general, an alpha greater than 2 or a minimum rank greater than 10 (take with a grain of salt) means that your distribution is relativaly _unnatural_ for natural language. This can be a sign of mixed artefacts in the dataset, such as HTML markup." - ) - st.markdown( - "Below, you can see the counts of each word in your dataset vs. the expected number of counts following a Zipfian distribution." - ) - st.markdown("-----") - st.write("### Here is your dataset's Zipf results:") - st.dataframe(fit_results_table) - st.write(zipf_summary) - # TODO: Nice UI version of the content in the comments. - # st.markdown("\nThe KS test p-value is < %.2f" % z.ks_test.pvalue) - # if z.ks_test.pvalue < 0.01: - # st.markdown( - # "\n Great news! Your data fits a powerlaw with a minimum KS " "distance of %.4f" % z.distance) - # else: - # st.markdown("\n Sadly, your data does not fit a powerlaw. =(") - # st.markdown("Checking the goodness of fit of our observed distribution") - # st.markdown("to the hypothesized power law distribution") - # st.markdown("using a Kolmogorov–Smirnov (KS) test.") - st.plotly_chart(zipf_fig, use_container_width=True) - if z.alpha > 2: - st.markdown(alpha_warning) - if z.xmin > 5: - st.markdown(xmin_warning) - except: - st.write("Under construction!") - - -def npmi_widget(dstats, column_id=""): - """ - Part of the UI, but providing for interaction. - :param column_id: - :param dstats: - :return: - """ - min_vocab = dstats.min_vocab_count - npmi_stats = dstats.npmi_obj - available_terms = npmi_stats.avail_identity_terms - with st.expander(f"Word Association{column_id}: nPMI", expanded=False): - if npmi_stats and len(available_terms) > 0: - expander_npmi_description(min_vocab) - st.markdown("-----") - term1 = st.selectbox( - f"What is the first term you want to select?{column_id}", - available_terms, - ) - term2 = st.selectbox( - f"What is the second term you want to select?{column_id}", - reversed(available_terms), - ) - try: - joint_npmi_df = npmi_stats.get_display(term1, term2) - npmi_show(joint_npmi_df) - except Exception as e: - logs.exception(e) - st.markdown( - "**WARNING!** The nPMI for these terms has not been" - " pre-computed, please re-run caching." - ) - else: - st.markdown("No words found co-occurring with both of the selected identity" - " terms.") - - -def npmi_show(paired_results): - if paired_results.empty: - st.markdown( - "No words that co-occur enough times for results! Or there's a 🐛." - " Or we're still computing this one. 🤷") - else: - logs.debug("Results to be shown in streamlit are") - logs.debug(paired_results) - s = pd.DataFrame( - paired_results.sort_values(paired_results.columns[0], ascending=True)) - s.index.name = "word" - bias_col = s.filter(like="bias").columns - #count_cols = s.filter(like="count").columns - # Keep the dataframe from being crazy big. - if s.shape[0] > 10000: - bias_thres = max(abs(s[s[0]][5000]), - abs(s[s[0]][-5000])) - logs.info(f"filtering with bias threshold: {bias_thres}") - s_filtered = s[s[0].abs() > bias_thres] - else: - s_filtered = s - cm = sns.palplot(sns.diverging_palette(270, 36, s=99, l=48, n=16)) - out_df = s_filtered.style.background_gradient(subset=bias_col, cmap=cm).format(formatter="{:,.3f}").set_properties(**{"align": "center", "width":"100em"}).set_caption("nPMI scores between the selected identity terms and the words they both co-occur with") - #set_properties(subset=count_cols, **{"width": "10em", "text-align": "center"}). - # .format(subset=count_cols, formatter=int). - #.format(subset=bias_col, formatter="{:,.3f}") - st.write("### Here is your dataset's bias results:") - st.dataframe(out_df) diff --git a/spaces/ICML2022/OFA/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py b/spaces/ICML2022/OFA/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py deleted file mode 100644 index 079db13e61c5ef46d1b1d288012145148eb0be04..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss - - -@register_criterion("label_smoothed_cross_entropy_r3f") -class LabelSmoothedCrossEntropyR3FCriterion(FairseqCriterion): - def __init__( - self, task, sentence_avg, label_smoothing, eps, r3f_lambda, noise_type - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.label_smoothing = label_smoothing - self.eps = eps - self.r3f_lambda = r3f_lambda - self.noise_type = noise_type - if self.noise_type in {"normal"}: - self.noise_sampler = torch.distributions.normal.Normal( - loc=0.0, scale=self.eps - ) - elif self.noise_type == "uniform": - self.noise_sampler = torch.distributions.uniform.Uniform( - low=-self.eps, high=self.eps - ) - else: - raise Exception(f"unrecognized noise type {self.noise_type}") - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--label-smoothing', default=0., type=float, metavar='D', - help='epsilon for label smoothing, 0 means no label smoothing') - parser.add_argument('--eps', type=float, default=1e-5, - help='noise eps') - parser.add_argument('--r3f-lambda', type=float, default=1.0, - help='lambda for combining logistic loss and noisy KL loss') - parser.add_argument('--noise-type', type=str, default='normal', - choices=['normal', 'uniform'], - help='type of noises') - # fmt: on - - def _get_symm_kl(self, noised_logits, input_logits): - return ( - F.kl_div( - F.log_softmax(noised_logits, dim=-1, dtype=torch.float32), - F.softmax(input_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - + F.kl_div( - F.log_softmax(input_logits, dim=-1, dtype=torch.float32), - F.softmax(noised_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - ) / noised_logits.size(0) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - token_embeddings = model.encoder.embed_tokens(sample["net_input"]["src_tokens"]) - input_logits, extra = model(**sample["net_input"]) - loss, nll_loss = self.compute_loss( - model, (input_logits, extra), sample, reduce=reduce - ) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - - if model.training: - noise = self.noise_sampler.sample(sample_shape=token_embeddings.shape).to( - token_embeddings - ) - noised_embeddings = token_embeddings.clone() + noise - - noised_logits, _ = model( - **sample["net_input"], token_embeddings=noised_embeddings - ) - symm_kl = self._get_symm_kl(noised_logits, input_logits) - - if model.training: - symm_kl = symm_kl * sample_size - loss = loss + self.r3f_lambda * symm_kl - - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - - if model.training: - logging_output.update( - symm_kl=utils.item(symm_kl.data) if reduce else symm_kl.data - ) - - return loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.view(-1, lprobs.size(-1)) - target = model.get_targets(sample, net_output).view(-1, 1) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, - target, - self.label_smoothing, - ignore_index=self.padding_idx, - reduce=reduce, - ) - return loss, nll_loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - symm_kl_sum = sum(log.get("symm_kl", 0) for log in logging_outputs) - - metrics.log_scalar("symm_kl", symm_kl_sum / sample_size, sample_size, round=3) - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/README.md b/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/README.md deleted file mode 100644 index 62a005e0ec6f15af9015d335e34b45df6ed89b6c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Simultaneous Translation -Examples of simultaneous translation in fairseq -- [English-to-Japanese text-to-text wait-k model](docs/enja-waitk.md) -- [English-to-Germen text-to-text monotonic multihead attention model](docs/ende-mma.md) -- [English-to-Germen speech-to-text simultaneous translation model](../speech_to_text/docs/simulst_mustc_example.md) diff --git a/spaces/ICML2022/OFA/fairseq/examples/unsupervised_quality_estimation/README.md b/spaces/ICML2022/OFA/fairseq/examples/unsupervised_quality_estimation/README.md deleted file mode 100644 index e86a0d13b883af0c37fdc2c1fee9b0b9dff4d18c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/unsupervised_quality_estimation/README.md +++ /dev/null @@ -1,126 +0,0 @@ -# Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020) - -This page includes instructions for reproducing results from the paper [Unsupervised Quality Estimation for Neural -Machine Translation (Fomicheva et al., 2020)](https://arxiv.org/abs/2005.10608) - -## Requirements: - -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* subword-nmt: https://github.com/rsennrich/subword-nmt -* flores: https://github.com/facebookresearch/flores - -## Download Models and Test Data - -Download translation models and test data from [MLQE dataset repository](https://github.com/facebookresearch/mlqe). - -## Set up: - -Given a testset consisting of source sentences and reference translations: - -* `SRC_LANG`: source language -* `TGT_LANG`: target language -* `INPUT`: input prefix, such that the file `$INPUT.$SRC_LANG` contains source sentences and `$INPUT.$TGT_LANG` -contains the reference sentences -* `OUTPUT_DIR`: output path to store results -* `MOSES_DECODER`: path to mosesdecoder installation -* `BPE_ROOT`: path to subword-nmt installation -* `BPE`: path to BPE model -* `MODEL_DIR`: directory containing the NMT model `.pt` file as well as the source and target vocabularies. -* `TMP`: directory for intermediate temporary files -* `GPU`: if translating with GPU, id of the GPU to use for inference -* `DROPOUT_N`: number of stochastic forward passes - -`$DROPOUT_N` is set to 30 in the experiments reported in the paper. However, we observed that increasing it beyond 10 -does not bring substantial improvements. - -## Translate the data using standard decoding - -Preprocess the input data: -``` -for LANG in $SRC_LANG $TGT_LANG; do - perl $MOSES_DECODER/scripts/tokenizer/tokenizer.perl -threads 80 -a -l $LANG < $INPUT.$LANG > $TMP/preprocessed.tok.$LANG - python $BPE_ROOT/apply_bpe.py -c ${BPE} < $TMP/preprocessed.tok.$LANG > $TMP/preprocessed.tok.bpe.$LANG -done -``` - -Binarize the data for faster translation: - -``` -fairseq-preprocess --srcdict $MODEL_DIR/dict.$SRC_LANG.txt --tgtdict $MODEL_DIR/dict.$TGT_LANG.txt ---source-lang ${SRC_LANG} --target-lang ${TGT_LANG} --testpref $TMP/preprocessed.tok.bpe --destdir $TMP/bin --workers 4 -``` - -Translate - -``` -CUDA_VISIBLE_DEVICES=$GPU fairseq-generate $TMP/bin --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5 ---source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 > $TMP/fairseq.out -grep ^H $TMP/fairseq.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/mt.out -``` - -Post-process - -``` -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/mt.out | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $OUTPUT_DIR/mt.out -``` - -## Produce uncertainty estimates - -### Scoring - -Make temporary files to store the translations repeated N times. - -``` -python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/preprocessed.tok.bpe.$SRC_LANG -n $DROPOUT_N --o $TMP/repeated.$SRC_LANG -python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/mt.out -n $DROPOUT_N -o $TMP/repeated.$TGT_LANG - -fairseq-preprocess --srcdict ${MODEL_DIR}/dict.${SRC_LANG}.txt $TGT_DIC --source-lang ${SRC_LANG} ---target-lang ${TGT_LANG} --testpref ${TMP}/repeated --destdir ${TMP}/bin-repeated -``` - -Produce model scores for the generated translations using `--retain-dropout` option to apply dropout at inference time: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt --beam 5 - --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 --score-reference --retain-dropout - --retain-dropout-modules '["TransformerModel","TransformerEncoder","TransformerDecoder","TransformerEncoderLayer"]' - TransformerDecoderLayer --seed 46 > $TMP/dropout.scoring.out - -grep ^H $TMP/dropout.scoring.out | cut -d- -f2- | sort -n | cut -f2 > $TMP/dropout.scores - -``` - -Use `--retain-dropout-modules` to specify the modules. By default, dropout is applied in the same places -as for training. - -Compute the mean of the resulting output distribution: - -``` -python $SCRIPTS/scripts/uncertainty/aggregate_scores.py -i $TMP/dropout.scores -o $OUTPUT_DIR/dropout.scores.mean --n $DROPOUT_N -``` - -### Generation - -Produce multiple translation hypotheses for the same source using `--retain-dropout` option: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt - --beam 5 --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --retain-dropout - --unkpen 5 --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder -TransformerEncoderLayer TransformerDecoderLayer --seed 46 > $TMP/dropout.generation.out - -grep ^H $TMP/dropout.generation.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/dropout.hypotheses_ - -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/dropout.hypotheses_ | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $TMP/dropout.hypotheses -``` - -Compute similarity between multiple hypotheses corresponding to the same source sentence using Meteor -evaluation metric: -``` -python meteor.py -i $TMP/dropout.hypotheses -m -n $DROPOUT_N -o -$OUTPUT_DIR/dropout.gen.sim.meteor -``` diff --git a/spaces/Illumotion/Koboldcpp/include/CL/cl2.hpp b/spaces/Illumotion/Koboldcpp/include/CL/cl2.hpp deleted file mode 100644 index a332962a8fd9f236071444d9e704faae6f8b18b9..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/CL/cl2.hpp +++ /dev/null @@ -1,18 +0,0 @@ -// -// Copyright (c) 2020 The Khronos Group Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// - -#include -#pragma message("cl2.hpp has been renamed to opencl.hpp to make it clear that it supports all versions of OpenCL. Please include opencl.hpp directly.") diff --git a/spaces/Jackflack09/diffuse-custom/Waifu2x/Models.py b/spaces/Jackflack09/diffuse-custom/Waifu2x/Models.py deleted file mode 100644 index 7e1d861f4908fbcb3912ff2f75d185ee33b39eb7..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/Waifu2x/Models.py +++ /dev/null @@ -1,316 +0,0 @@ -import json -from collections import OrderedDict -from math import exp - -from .Common import * - - -# +++++++++++++++++++++++++++++++++++++ -# FP16 Training -# ------------------------------------- -# Modified from Nvidia/Apex -# https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/fp16util.py - -class tofp16(nn.Module): - def __init__(self): - super(tofp16, self).__init__() - - def forward(self, input): - if input.is_cuda: - return input.half() - else: # PyTorch 1.0 doesn't support fp16 in CPU - return input.float() - - -def BN_convert_float(module): - if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): - module.float() - for child in module.children(): - BN_convert_float(child) - return module - - -def network_to_half(network): - return nn.Sequential(tofp16(), BN_convert_float(network.half())) - - -# warnings.simplefilter('ignore') - -# +++++++++++++++++++++++++++++++++++++ -# DCSCN -# ------------------------------------- - -class DCSCN(BaseModule): - # https://github.com/jiny2001/dcscn-super-resolution - def __init__(self, - color_channel=3, - up_scale=2, - feature_layers=12, - first_feature_filters=196, - last_feature_filters=48, - reconstruction_filters=128, - up_sampler_filters=32 - ): - super(DCSCN, self).__init__() - self.total_feature_channels = 0 - self.total_reconstruct_filters = 0 - self.upscale = up_scale - - self.act_fn = nn.SELU(inplace=False) - self.feature_block = self.make_feature_extraction_block(color_channel, - feature_layers, - first_feature_filters, - last_feature_filters) - - self.reconstruction_block = self.make_reconstruction_block(reconstruction_filters) - self.up_sampler = self.make_upsampler(up_sampler_filters, color_channel) - self.selu_init_params() - - def selu_init_params(self): - for i in self.modules(): - if isinstance(i, nn.Conv2d): - i.weight.data.normal_(0.0, 1.0 / sqrt(i.weight.numel())) - if i.bias is not None: - i.bias.data.fill_(0) - - def conv_block(self, in_channel, out_channel, kernel_size): - m = OrderedDict([ - # ("Padding", nn.ReplicationPad2d((kernel_size - 1) // 2)), - ('Conv2d', nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, padding=(kernel_size - 1) // 2)), - ('Activation', self.act_fn) - ]) - - return nn.Sequential(m) - - def make_feature_extraction_block(self, color_channel, num_layers, first_filters, last_filters): - # input layer - feature_block = [("Feature 1", self.conv_block(color_channel, first_filters, 3))] - # exponential decay - # rest layers - alpha_rate = log(first_filters / last_filters) / (num_layers - 1) - filter_nums = [round(first_filters * exp(-alpha_rate * i)) for i in range(num_layers)] - - self.total_feature_channels = sum(filter_nums) - - layer_filters = [[filter_nums[i], filter_nums[i + 1], 3] for i in range(num_layers - 1)] - - feature_block.extend([("Feature {}".format(index + 2), self.conv_block(*x)) - for index, x in enumerate(layer_filters)]) - return nn.Sequential(OrderedDict(feature_block)) - - def make_reconstruction_block(self, num_filters): - B1 = self.conv_block(self.total_feature_channels, num_filters // 2, 1) - B2 = self.conv_block(num_filters // 2, num_filters, 3) - m = OrderedDict([ - ("A", self.conv_block(self.total_feature_channels, num_filters, 1)), - ("B", nn.Sequential(*[B1, B2])) - ]) - self.total_reconstruct_filters = num_filters * 2 - return nn.Sequential(m) - - def make_upsampler(self, out_channel, color_channel): - out = out_channel * self.upscale ** 2 - m = OrderedDict([ - ('Conv2d_block', self.conv_block(self.total_reconstruct_filters, out, kernel_size=3)), - ('PixelShuffle', nn.PixelShuffle(self.upscale)), - ("Conv2d", nn.Conv2d(out_channel, color_channel, kernel_size=3, padding=1, bias=False)) - ]) - - return nn.Sequential(m) - - def forward(self, x): - # residual learning - lr, lr_up = x - feature = [] - for layer in self.feature_block.children(): - lr = layer(lr) - feature.append(lr) - feature = torch.cat(feature, dim=1) - - reconstruction = [layer(feature) for layer in self.reconstruction_block.children()] - reconstruction = torch.cat(reconstruction, dim=1) - - lr = self.up_sampler(reconstruction) - return lr + lr_up - - -# +++++++++++++++++++++++++++++++++++++ -# CARN -# ------------------------------------- - -class CARN_Block(BaseModule): - def __init__(self, channels, kernel_size=3, padding=1, dilation=1, - groups=1, activation=nn.SELU(), repeat=3, - SEBlock=False, conv=nn.Conv2d, - single_conv_size=1, single_conv_group=1): - super(CARN_Block, self).__init__() - m = [] - for i in range(repeat): - m.append(ResidualFixBlock(channels, channels, kernel_size=kernel_size, padding=padding, dilation=dilation, - groups=groups, activation=activation, conv=conv)) - if SEBlock: - m.append(SpatialChannelSqueezeExcitation(channels, reduction=channels)) - self.blocks = nn.Sequential(*m) - self.singles = nn.Sequential( - *[ConvBlock(channels * (i + 2), channels, kernel_size=single_conv_size, - padding=(single_conv_size - 1) // 2, groups=single_conv_group, - activation=activation, conv=conv) - for i in range(repeat)]) - - def forward(self, x): - c0 = x - for block, single in zip(self.blocks, self.singles): - b = block(x) - c0 = c = torch.cat([c0, b], dim=1) - x = single(c) - - return x - - -class CARN(BaseModule): - # Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network - # https://github.com/nmhkahn/CARN-pytorch - def __init__(self, - color_channels=3, - mid_channels=64, - scale=2, - activation=nn.SELU(), - num_blocks=3, - conv=nn.Conv2d): - super(CARN, self).__init__() - - self.color_channels = color_channels - self.mid_channels = mid_channels - self.scale = scale - - self.entry_block = ConvBlock(color_channels, mid_channels, kernel_size=3, padding=1, activation=activation, - conv=conv) - self.blocks = nn.Sequential( - *[CARN_Block(mid_channels, kernel_size=3, padding=1, activation=activation, conv=conv, - single_conv_size=1, single_conv_group=1) - for _ in range(num_blocks)]) - self.singles = nn.Sequential( - *[ConvBlock(mid_channels * (i + 2), mid_channels, kernel_size=1, padding=0, - activation=activation, conv=conv) - for i in range(num_blocks)]) - - self.upsampler = UpSampleBlock(mid_channels, scale=scale, activation=activation, conv=conv) - self.exit_conv = conv(mid_channels, color_channels, kernel_size=3, padding=1) - - def forward(self, x): - x = self.entry_block(x) - c0 = x - for block, single in zip(self.blocks, self.singles): - b = block(x) - c0 = c = torch.cat([c0, b], dim=1) - x = single(c) - x = self.upsampler(x) - out = self.exit_conv(x) - return out - - -class CARN_V2(CARN): - def __init__(self, color_channels=3, mid_channels=64, - scale=2, activation=nn.LeakyReLU(0.1), - SEBlock=True, conv=nn.Conv2d, - atrous=(1, 1, 1), repeat_blocks=3, - single_conv_size=3, single_conv_group=1): - super(CARN_V2, self).__init__(color_channels=color_channels, mid_channels=mid_channels, scale=scale, - activation=activation, conv=conv) - - num_blocks = len(atrous) - m = [] - for i in range(num_blocks): - m.append(CARN_Block(mid_channels, kernel_size=3, padding=1, dilation=1, - activation=activation, SEBlock=SEBlock, conv=conv, repeat=repeat_blocks, - single_conv_size=single_conv_size, single_conv_group=single_conv_group)) - - self.blocks = nn.Sequential(*m) - - self.singles = nn.Sequential( - *[ConvBlock(mid_channels * (i + 2), mid_channels, kernel_size=single_conv_size, - padding=(single_conv_size - 1) // 2, groups=single_conv_group, - activation=activation, conv=conv) - for i in range(num_blocks)]) - - def forward(self, x): - x = self.entry_block(x) - c0 = x - res = x - for block, single in zip(self.blocks, self.singles): - b = block(x) - c0 = c = torch.cat([c0, b], dim=1) - x = single(c) - x = x + res - x = self.upsampler(x) - out = self.exit_conv(x) - return out - - -# +++++++++++++++++++++++++++++++++++++ -# original Waifu2x model -# ------------------------------------- - - -class UpConv_7(BaseModule): - # https://github.com/nagadomi/waifu2x/blob/3c46906cb78895dbd5a25c3705994a1b2e873199/lib/srcnn.lua#L311 - def __init__(self): - super(UpConv_7, self).__init__() - self.act_fn = nn.LeakyReLU(0.1, inplace=False) - self.offset = 7 # because of 0 padding - from torch.nn import ZeroPad2d - self.pad = ZeroPad2d(self.offset) - m = [nn.Conv2d(3, 16, 3, 1, 0), - self.act_fn, - nn.Conv2d(16, 32, 3, 1, 0), - self.act_fn, - nn.Conv2d(32, 64, 3, 1, 0), - self.act_fn, - nn.Conv2d(64, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 256, 3, 1, 0), - self.act_fn, - # in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding= - nn.ConvTranspose2d(256, 3, kernel_size=4, stride=2, padding=3, bias=False) - ] - self.Sequential = nn.Sequential(*m) - - def load_pre_train_weights(self, json_file): - with open(json_file) as f: - weights = json.load(f) - box = [] - for i in weights: - box.append(i['weight']) - box.append(i['bias']) - own_state = self.state_dict() - for index, (name, param) in enumerate(own_state.items()): - own_state[name].copy_(torch.FloatTensor(box[index])) - - def forward(self, x): - x = self.pad(x) - return self.Sequential.forward(x) - - - -class Vgg_7(UpConv_7): - def __init__(self): - super(Vgg_7, self).__init__() - self.act_fn = nn.LeakyReLU(0.1, inplace=False) - self.offset = 7 - m = [nn.Conv2d(3, 32, 3, 1, 0), - self.act_fn, - nn.Conv2d(32, 32, 3, 1, 0), - self.act_fn, - nn.Conv2d(32, 64, 3, 1, 0), - self.act_fn, - nn.Conv2d(64, 64, 3, 1, 0), - self.act_fn, - nn.Conv2d(64, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 128, 3, 1, 0), - self.act_fn, - nn.Conv2d(128, 3, 3, 1, 0) - ] - self.Sequential = nn.Sequential(*m) diff --git a/spaces/Jarvis2301/Aku/app.py b/spaces/Jarvis2301/Aku/app.py deleted file mode 100644 index 204798005947ce3cfed3407c6c42436b8fb2ca26..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/app.py +++ /dev/null @@ -1,150 +0,0 @@ -# coding=utf-8 -import time -import os -import gradio as gr -import utils -import argparse -import commons -from models import SynthesizerTrn -from text import text_to_sequence -import torch -from torch import no_grad, LongTensor -import webbrowser -import logging -import gradio.processing_utils as gr_processing_utils -logging.getLogger('numba').setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - -audio_postprocess_ori = gr.Audio.postprocess -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) -gr.Audio.postprocess = audio_postprocess - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "Teks masukan tidak boleh kosong!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 100 and limitation: - return f"Teks masukan terlalu panjang!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - speaker_id = LongTensor([speaker_id]).to(device) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "Berhasil dibuat!", (22050, audio), f"Membutuhkan waktu untuk menghasilkan {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--colab", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - device = torch.device(args.device) - - hps_ms = utils.get_hparams_from_file(r'./model/config.json') - net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model) - _ = net_g_ms.eval().to(device) - speakers = hps_ms.speakers - model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - - with gr.Blocks() as app: - gr.Markdown( - "#
              Generator Sound Online demo\n" - "#
              Dilarang keras menggunakan model untuk proyek komersial apa pun, jika tidak, Anda akan menanggung konsekuensinya\n" - "
              Ada terutama warna nada dari Saima Niang, Cina Yuanshin, Jepang Yuanshin, dan Honkai 3
              " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation) " if limitation else "Text", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["Cina", "Jepang", "Campuran Cina dan Jepang (Cina dibungkus dengan [ZH][ZH], Jepang dibungkus dengan [JA][JA])"], - type="index", value="Cina") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(mengendalikan perubahan emosi)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(Kontrol panjang pengucapan vokal)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(Kontrol kecepatan bicara secara keseluruhan)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3]) - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("Daftar karakter yang tersedia"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - if args.colab: - webbrowser.open("http://127.0.0.1:7860") - app.queue(concurrency_count=2, api_open=args.api).launch(share=args.share) diff --git a/spaces/JoeStrout/simple-llama-finetuner/README.md b/spaces/JoeStrout/simple-llama-finetuner/README.md deleted file mode 100644 index 2e339bbc63ba8e0b9f39282f539abd8aab2859a4..0000000000000000000000000000000000000000 --- a/spaces/JoeStrout/simple-llama-finetuner/README.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: Simple LLaMA Finetuner -emoji: 🦙 -colorFrom: yellow -colorTo: orange -sdk: gradio -app_file: main.py -pinned: false -duplicated_from: lxe/simple-llama-finetuner ---- - -# 🦙 Simple LLaMA Finetuner - -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb) -[![Open In Spaces](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/lxe/simple-llama-finetuner) -[![](https://img.shields.io/badge/no-bugs-brightgreen.svg)](https://github.com/lxe/no-bugs) -[![](https://img.shields.io/badge/coverage-%F0%9F%92%AF-green.svg)](https://github.com/lxe/onehundred/tree/master) - -Simple LLaMA Finetuner is a beginner-friendly interface designed to facilitate fine-tuning the [LLaMA-7B](https://github.com/facebookresearch/llama) language model using [LoRA](https://arxiv.org/abs/2106.09685) method via the [PEFT library](https://github.com/huggingface/peft) on commodity NVIDIA GPUs. With small dataset and sample lengths of 256, you can even run this on a regular Colab Tesla T4 instance. - -With this intuitive UI, you can easily manage your dataset, customize parameters, train, and evaluate the model's inference capabilities. - -## Acknowledgements - - - https://github.com/zphang/minimal-llama/ - - https://github.com/tloen/alpaca-lora - - https://github.com/huggingface/peft - - https://huggingface.co/datasets/Anthropic/hh-rlhf - -## Features - -- Simply paste datasets in the UI, separated by double blank lines -- Adjustable parameters for fine-tuning and inference -- Beginner-friendly UI with explanations for each parameter - -## TODO - -- [ ] Accelerate / DeepSpeed -- [ ] Load other models -- [ ] More dataset preparation tools - -## Getting Started - -### Prerequisites - -- Linux or WSL -- Modern NVIDIA GPU with >= 16 GB of VRAM (but it might be possible to run with less for smaller sample lengths) - -### Usage - -I recommend using a virtual environment to install the required packages. Conda preferred. - -``` -conda create -n llama-finetuner python=3.10 -conda activate llama-finetuner -conda install -y cuda -c nvidia/label/cuda-11.7.0 -conda install -y pytorch=1.13.1 pytorch-cuda=11.7 -c pytorch -``` - -On WSL, you might need to install CUDA manually by following [these steps](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local), then running the following before you launch: - -``` -export LD_LIBRARY_PATH=/usr/lib/wsl/lib -``` - -Clone the repository and install the required packages. - -``` -git clone https://github.com/lxe/simple-llama-finetuner.git -cd simple-llama-finetuner -pip install -r requirements.txt -``` - -Launch it - -``` -python main.py -``` - -Open http://127.0.0.1:7860/ in your browser. Prepare your training data by separating each sample with 2 blank lines. Paste the whole training dataset into the textbox. Specify the model name in the "LoRA Model Name" textbox, then click train. You might need to adjust the max sequence length and batch size to fit your GPU memory. The model will be saved in the `lora-{your model name}` directory. - -After training is done, navigate to "Inference" tab, click "Reload Models", select your model, and play with it. - -Have fun! - -## Screenshots - -|![Image1](https://user-images.githubusercontent.com/1486609/226793136-84531388-4081-49bb-b982-3f47e6ec25cd.png) | ![Image2](https://user-images.githubusercontent.com/1486609/226809466-b1eb6f3f-4049-4a41-a2e3-52b06a6e1230.png) | -|:---:|:---:| - -## License - -MIT License - -Copyright (c) 2023 Aleksey Smolenchuk - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/ChuanhuChatbot.py b/spaces/JohnSmith9982/ChuanhuChatGPT/ChuanhuChatbot.py deleted file mode 100644 index d498359af5c02037247406830672bcbbdbb7006b..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/ChuanhuChatbot.py +++ /dev/null @@ -1,559 +0,0 @@ -# -*- coding:utf-8 -*- -import logging -logging.basicConfig( - level=logging.INFO, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -import colorama -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.webui import * -from modules.repo import * -from modules.train_func import * -from modules.models.models import get_model - -logging.getLogger("httpx").setLevel(logging.WARNING) - -gr.Chatbot._postprocess_chat_messages = postprocess_chat_messages -gr.Chatbot.postprocess = postprocess - -# with open("web_assets/css/ChuanhuChat.css", "r", encoding="utf-8") as f: -# ChuanhuChatCSS = f.read() - -def create_new_model(): - return get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0] - -with gr.Blocks(theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_question = gr.State("") - assert type(my_api_key)==str - user_api_key = gr.State(my_api_key) - current_model = gr.State(create_new_model) - - topic = gr.State(i18n("未命名对话历史记录")) - - with gr.Row(): - gr.HTML(CHUANHU_TITLE, elem_id="app-title") - status_display = gr.Markdown(get_geoip(), elem_id="status-display") - with gr.Row(elem_id="float-display"): - user_info = gr.Markdown(value="getting user info...", elem_id="user-info") - config_info = gr.HTML(get_html("config_info.html").format(bot_avatar=config.bot_avatar, user_avatar=config.user_avatar), visible=False, elem_id="config-info") - update_info = gr.HTML(get_html("update.html").format( - current_version=repo_tag_html(), - version_time=version_time(), - cancel_btn=i18n("取消"), - update_btn=i18n("更新"), - seenew_btn=i18n("详情"), - ok_btn=i18n("好"), - ), visible=check_update) - - with gr.Row(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(label="Chuanhu Chat", elem_id="chuanhu-chatbot", latex_delimiters=latex_delimiters_set, height=700) - with gr.Row(): - with gr.Column(min_width=225, scale=12): - user_input = gr.Textbox( - elem_id="user-input-tb", - show_label=False, placeholder=i18n("在这里输入"), - container=False - ) - with gr.Column(min_width=42, scale=1): - submitBtn = gr.Button(value="", variant="primary", elem_id="submit-btn") - cancelBtn = gr.Button(value="", variant="secondary", visible=False, elem_id="cancel-btn") - with gr.Row(elem_id="chatbot-buttons"): - with gr.Column(min_width=120, scale=1): - emptyBtn = gr.Button( - i18n("🧹 新的对话"), elem_id="empty-btn" - ) - with gr.Column(min_width=120, scale=1): - retryBtn = gr.Button(i18n("🔄 重新生成")) - with gr.Column(min_width=120, scale=1): - delFirstBtn = gr.Button(i18n("🗑️ 删除最旧对话")) - with gr.Column(min_width=120, scale=1): - delLastBtn = gr.Button(i18n("🗑️ 删除最新对话")) - with gr.Row(visible=False) as like_dislike_area: - with gr.Column(min_width=20, scale=1): - likeBtn = gr.Button(i18n("👍")) - with gr.Column(min_width=20, scale=1): - dislikeBtn = gr.Button(i18n("👎")) - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label=i18n("模型")): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"Your API-key...", - value=hide_middle_chars(user_api_key.value), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown(i18n("多账号模式已开启,无需输入key,可直接开始对话"), elem_id="usage-display", elem_classes="insert-block", visible=show_api_billing) - else: - usageTxt = gr.Markdown(i18n("**发送消息** 或 **提交key** 以显示额度"), elem_id="usage-display", elem_classes="insert-block", visible=show_api_billing) - model_select_dropdown = gr.Dropdown( - label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True - ) - lora_select_dropdown = gr.Dropdown( - label=i18n("选择LoRA模型"), choices=[], multiselect=False, interactive=True, visible=False - ) - with gr.Row(): - single_turn_checkbox = gr.Checkbox(label=i18n("单轮对话"), value=False, elem_classes="switch-checkbox") - use_websearch_checkbox = gr.Checkbox(label=i18n("使用在线搜索"), value=False, elem_classes="switch-checkbox") - language_select_dropdown = gr.Dropdown( - label=i18n("选择回复语言(针对搜索&索引功能)"), - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label=i18n("上传"), type="file", elem_id="upload-index-file") - two_column = gr.Checkbox(label=i18n("双栏pdf"), value=advance_docs["pdf"].get("two_column", False)) - summarize_btn = gr.Button(i18n("总结")) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label=i18n("识别公式"), value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入System Prompt..."), - label="System prompt", - value=INITIAL_SYSTEM_PROMPT, - lines=10 - ) - with gr.Accordion(label=i18n("加载Prompt模板"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label=i18n("选择Prompt模板集合文件"), - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - container=False, - ) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label=i18n("从Prompt模板中加载"), - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - container=False, - ) - - with gr.Tab(label=i18n("保存/加载")): - with gr.Accordion(label=i18n("保存/加载对话历史记录"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label=i18n("从列表中加载对话"), - choices=get_history_names(plain=True), - multiselect=False, - container=False, - ) - with gr.Row(): - with gr.Column(min_width=42, scale=1): - historyRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Column(min_width=42, scale=1): - historyDeleteBtn = gr.Button(i18n("🗑️ 删除")) - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=i18n("设置文件名: 默认为.json,可选为.md"), - label=i18n("设置保存文件名"), - value=i18n("对话历史记录"), - elem_classes="no-container" - # container=False, - ) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button(i18n("💾 保存对话")) - exportMarkdownBtn = gr.Button(i18n("📝 导出为Markdown")) - gr.Markdown(i18n("默认保存于history文件夹")) - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label=i18n("微调")): - openai_train_status = gr.Markdown(label=i18n("训练状态"), value=i18n("在这里[查看使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E4%BD%BF%E7%94%A8%E6%95%99%E7%A8%8B#%E5%BE%AE%E8%B0%83-gpt-35)")) - - with gr.Tab(label=i18n("准备数据集")): - dataset_preview_json = gr.JSON(label=i18n("数据集预览"), readonly=True) - dataset_selection = gr.Files(label = i18n("选择数据集"), file_types=[".xlsx", ".jsonl"], file_count="single") - upload_to_openai_btn = gr.Button(i18n("上传到OpenAI"), variant="primary", interactive=False) - - with gr.Tab(label=i18n("训练")): - openai_ft_file_id = gr.Textbox(label=i18n("文件ID"), value="", lines=1, placeholder=i18n("上传到 OpenAI 后自动填充")) - openai_ft_suffix = gr.Textbox(label=i18n("模型名称后缀"), value="", lines=1, placeholder=i18n("可选,用于区分不同的模型")) - openai_train_epoch_slider = gr.Slider(label=i18n("训练轮数(Epochs)"), minimum=1, maximum=100, value=3, step=1, interactive=True) - openai_start_train_btn = gr.Button(i18n("开始训练"), variant="primary", interactive=False) - - with gr.Tab(label=i18n("状态")): - openai_status_refresh_btn = gr.Button(i18n("刷新状态")) - openai_cancel_all_jobs_btn = gr.Button(i18n("取消所有任务")) - add_to_models_btn = gr.Button(i18n("添加训练好的模型到模型列表"), interactive=False) - - with gr.Tab(label=i18n("高级")): - gr.HTML(get_html("appearance_switcher.html").format(label=i18n("切换亮暗色主题")), elem_classes="insert-block") - use_streaming_checkbox = gr.Checkbox( - label=i18n("实时传输回答"), value=True, visible=ENABLE_STREAMING_OPTION, elem_classes="switch-checkbox" - ) - checkUpdateBtn = gr.Button(i18n("🔄 检查更新..."), visible=check_update) - gr.Markdown(i18n("# ⚠️ 务必谨慎更改 ⚠️"), elem_id="advanced-warning") - with gr.Accordion(i18n("参数"), open=False): - temperature_slider = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="temperature", - ) - top_p_slider = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="top-p", - ) - n_choices_slider = gr.Slider( - minimum=1, - maximum=10, - value=1, - step=1, - interactive=True, - label="n choices", - ) - stop_sequence_txt = gr.Textbox( - show_label=True, - placeholder=i18n("停止符,用英文逗号隔开..."), - label="stop", - value="", - lines=1, - ) - max_context_length_slider = gr.Slider( - minimum=1, - maximum=32768, - value=2000, - step=1, - interactive=True, - label="max context", - ) - max_generation_slider = gr.Slider( - minimum=1, - maximum=32768, - value=1000, - step=1, - interactive=True, - label="max generations", - ) - presence_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="presence penalty", - ) - frequency_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="frequency penalty", - ) - logit_bias_txt = gr.Textbox( - show_label=True, - placeholder=f"word:likelihood", - label="logit bias", - value="", - lines=1, - ) - user_identifier_txt = gr.Textbox( - show_label=True, - placeholder=i18n("用于定位滥用行为"), - label=i18n("用户名"), - value=user_name.value, - lines=1, - ) - - with gr.Accordion(i18n("网络参数"), open=False): - gr.Markdown(i18n("---\n⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置"), elem_id="netsetting-warning") - default_btn = gr.Button(i18n("🔙 恢复默认网络设置")) - # 网络代理 - proxyTxt = gr.Textbox( - show_label=True, - placeholder=i18n("未设置代理..."), - label=i18n("代理地址"), - value=config.http_proxy, - lines=1, - interactive=False, - # container=False, - elem_classes="view-only-textbox no-container", - ) - # changeProxyBtn = gr.Button(i18n("🔄 设置代理地址")) - - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder="api.openai.com", - label="OpenAI API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - interactive=False, - # container=False, - elem_classes="view-only-textbox no-container", - ) - # changeAPIURLBtn = gr.Button(i18n("🔄 切换API地址")) - updateChuanhuBtn = gr.Button(visible=False, elem_classes="invisible-btn", elem_id="update-chuanhu-btn") - - - gr.Markdown(CHUANHU_DESCRIPTION, elem_id="description") - gr.HTML(get_html("footer.html").format(versions=versions_html()), elem_id="footer") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - user_info, user_name = gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - user_info, user_name = gr.Markdown.update(value=f"", visible=False), "" - current_model = get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0] - current_model.set_user_identifier(user_name) - chatbot = gr.Chatbot.update(label=MODELS[DEFAULT_MODEL]) - return user_info, user_name, current_model, toggle_like_btn_visibility(DEFAULT_MODEL), *current_model.auto_load(), get_history_names(False, user_name), chatbot - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name, current_model, like_dislike_area, systemPromptTxt, chatbot, historyFileSelectDropdown, chatbot], api_name="load") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - current_model, - user_question, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, status_display], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=billing_info, inputs=[current_model], outputs=[usageTxt], show_progress=False - ) - - load_history_from_file_args = dict( - fn=load_chat_history, - inputs=[current_model, historyFileSelectDropdown, user_name], - outputs=[saveFileName, systemPromptTxt, chatbot] - ) - - refresh_history_args = dict( - fn=get_history_names, inputs=[gr.State(False), user_name], outputs=[historyFileSelectDropdown] - ) - - - # Chatbot - cancelBtn.click(interrupt, [current_model], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args, api_name="predict").then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - index_files.change(handle_file_upload, [current_model, index_files, chatbot, language_select_dropdown], [index_files, chatbot, status_display]) - summarize_btn.click(handle_summarize_index, [current_model, index_files, chatbot, language_select_dropdown], [chatbot, status_display]) - - emptyBtn.click( - reset, - inputs=[current_model], - outputs=[chatbot, status_display], - show_progress=True, - _js='clearChatbot', - ) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - current_model, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - [chatbot, status_display], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [current_model], - [status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [current_model, chatbot], - [chatbot, status_display], - show_progress=False - ) - - likeBtn.click( - like, - [current_model], - [status_display], - show_progress=False - ) - - dislikeBtn.click( - dislike, - [current_model], - [status_display], - show_progress=False - ) - - two_column.change(update_doc_config, [two_column], None) - - # LLM Models - keyTxt.change(set_key, [current_model, keyTxt], [user_api_key, status_display], api_name="set_key").then(**get_usage_args) - keyTxt.submit(**get_usage_args) - single_turn_checkbox.change(set_single_turn, [current_model, single_turn_checkbox], None) - model_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt, user_name], [current_model, status_display, chatbot, lora_select_dropdown, user_api_key, keyTxt], show_progress=True, api_name="get_model") - model_select_dropdown.change(toggle_like_btn_visibility, [model_select_dropdown], [like_dislike_area], show_progress=False) - lora_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt, user_name], [current_model, status_display, chatbot], show_progress=True) - - # Template - systemPromptTxt.change(set_system_prompt, [current_model, systemPromptTxt], None) - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(**refresh_history_args) - historyDeleteBtn.click(delete_chat_history, [current_model, historyFileSelectDropdown, user_name], [status_display, historyFileSelectDropdown, chatbot], _js='(a,b,c)=>{return showConfirmationDialog(a, b, c);}') - historyFileSelectDropdown.change(**load_history_from_file_args) - downloadFile.change(upload_chat_history, [current_model, downloadFile, user_name], [saveFileName, systemPromptTxt, chatbot]) - - # Train - dataset_selection.upload(handle_dataset_selection, dataset_selection, [dataset_preview_json, upload_to_openai_btn, openai_train_status]) - dataset_selection.clear(handle_dataset_clear, [], [dataset_preview_json, upload_to_openai_btn]) - upload_to_openai_btn.click(upload_to_openai, [dataset_selection], [openai_ft_file_id, openai_train_status], show_progress=True) - - openai_ft_file_id.change(lambda x: gr.update(interactive=True) if len(x) > 0 else gr.update(interactive=False), [openai_ft_file_id], [openai_start_train_btn]) - openai_start_train_btn.click(start_training, [openai_ft_file_id, openai_ft_suffix, openai_train_epoch_slider], [openai_train_status]) - - openai_status_refresh_btn.click(get_training_status, [], [openai_train_status, add_to_models_btn]) - add_to_models_btn.click(add_to_models, [], [model_select_dropdown, openai_train_status], show_progress=True) - openai_cancel_all_jobs_btn.click(cancel_all_jobs, [], [openai_train_status], show_progress=True) - - # Advanced - max_context_length_slider.change(set_token_upper_limit, [current_model, max_context_length_slider], None) - temperature_slider.change(set_temperature, [current_model, temperature_slider], None) - top_p_slider.change(set_top_p, [current_model, top_p_slider], None) - n_choices_slider.change(set_n_choices, [current_model, n_choices_slider], None) - stop_sequence_txt.change(set_stop_sequence, [current_model, stop_sequence_txt], None) - max_generation_slider.change(set_max_tokens, [current_model, max_generation_slider], None) - presence_penalty_slider.change(set_presence_penalty, [current_model, presence_penalty_slider], None) - frequency_penalty_slider.change(set_frequency_penalty, [current_model, frequency_penalty_slider], None) - logit_bias_txt.change(set_logit_bias, [current_model, logit_bias_txt], None) - user_identifier_txt.change(set_user_identifier, [current_model, user_identifier_txt], None) - - default_btn.click( - reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True - ) - # changeAPIURLBtn.click( - # change_api_host, - # [apihostTxt], - # [status_display], - # show_progress=True, - # ) - # changeProxyBtn.click( - # change_proxy, - # [proxyTxt], - # [status_display], - # show_progress=True, - # ) - checkUpdateBtn.click(fn=None, _js='manualCheckUpdate') - - # Invisible elements - updateChuanhuBtn.click( - update_chuanhu, - [], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = i18n("川虎Chat 🚀") - -if __name__ == "__main__": - reload_javascript() - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - blocked_paths=["config.json"], - favicon_path="./web_assets/favicon.ico", - ) diff --git a/spaces/JustinLin610/ImageBind_zeroshot_demo/app.py b/spaces/JustinLin610/ImageBind_zeroshot_demo/app.py deleted file mode 100644 index 71493ec124d06e5c13ba107e22dd04b49893be50..0000000000000000000000000000000000000000 --- a/spaces/JustinLin610/ImageBind_zeroshot_demo/app.py +++ /dev/null @@ -1,142 +0,0 @@ -import data -import torch -import gradio as gr -from models import imagebind_model -from models.imagebind_model import ModalityType - - -device = "cuda:0" if torch.cuda.is_available() else "cpu" -model = imagebind_model.imagebind_huge(pretrained=True) -model.eval() -model.to(device) - - -def image_text_zeroshot(image, text_list): - image_paths = [image] - labels = [label.strip(" ") for label in text_list.strip(" ").split("|")] - inputs = { - ModalityType.TEXT: data.load_and_transform_text(labels, device), - ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device), - } - - with torch.no_grad(): - embeddings = model(inputs) - - scores = ( - torch.softmax( - embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1 - ) - .squeeze(0) - .tolist() - ) - - score_dict = {label: score for label, score in zip(labels, scores)} - - return score_dict - - -def audio_text_zeroshot(audio, text_list): - audio_paths = [audio] - labels = [label.strip(" ") for label in text_list.strip(" ").split("|")] - inputs = { - ModalityType.TEXT: data.load_and_transform_text(labels, device), - ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device), - } - - with torch.no_grad(): - embeddings = model(inputs) - - scores = ( - torch.softmax( - embeddings[ModalityType.AUDIO] @ embeddings[ModalityType.TEXT].T, dim=-1 - ) - .squeeze(0) - .tolist() - ) - - score_dict = {label: score for label, score in zip(labels, scores)} - - return score_dict - - -def video_text_zeroshot(video, text_list): - video_paths = [video] - labels = [label.strip(" ") for label in text_list.strip(" ").split("|")] - inputs = { - ModalityType.TEXT: data.load_and_transform_text(labels, device), - ModalityType.VISION: data.load_and_transform_video_data(video_paths, device), - } - - with torch.no_grad(): - embeddings = model(inputs) - - scores = ( - torch.softmax( - embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1 - ) - .squeeze(0) - .tolist() - ) - - score_dict = {label: score for label, score in zip(labels, scores)} - - return score_dict - - -def inference( - task, - text_list=None, - image=None, - audio=None, - video=None, -): - if task == "image-text": - result = image_text_zeroshot(image, text_list) - elif task == "audio-text": - result = audio_text_zeroshot(audio, text_list) - elif task == "video-text": - result = video_text_zeroshot(video, text_list) - else: - raise NotImplementedError - return result - - -def main(): - inputs = [ - gr.inputs.Radio( - choices=[ - "image-text", - "audio-text", - "video-text", - ], - type="value", - default="image-text", - label="Task", - ), - gr.inputs.Textbox(lines=1, label="Candidate texts"), - gr.inputs.Image(type="filepath", label="Input image"), - gr.inputs.Audio(type="filepath", label="Input audio"), - gr.inputs.Video(type=None, label="Input video"), - ] - - iface = gr.Interface( - inference, - inputs, - "label", - examples=[ - ["image-text", "A dog|A car|A bird", "assets/dog_image.jpg", None, None], - ["image-text", "A dog|A car|A bird", "assets/car_image.jpg", None, None], - ["audio-text", "A dog|A car|A bird", None, "assets/bird_audio.wav", None], - ["video-text", "A dog|A car|A bird", None, None, "assets/dog_video.mp4"], - ], - description="""

              This is a simple demo of ImageBind for zero-shot cross-modal understanding (now including image classification, audio classification, and video classification). Please refer to the original paper and repo for more details.
              - To test your own cases, you can upload an image, an audio or a video, and provide the candidate texts separated by "|".
              - You can duplicate this space and run it privately: Duplicate Space

              """, - title="ImageBind: Zero-shot Cross-modal Understanding", - ) - - iface.launch() - - -if __name__ == "__main__": - main() diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/models/occ_gmm.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/models/occ_gmm.py deleted file mode 100644 index 5caf72b50eea184fa0926feb1fe4655f451434ef..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/models/occ_gmm.py +++ /dev/null @@ -1,304 +0,0 @@ -from ..options import Options -from . import models_utils, transformer -from .. import constants -from ..custom_types import * -from torch import distributions -import math -from ..utils import files_utils - -def dot(x, y, dim=3): - return torch.sum(x * y, dim=dim) - - -def remove_projection(v_1, v_2): - proj = (dot(v_1, v_2) / dot(v_2, v_2)) - return v_1 - proj[:, :, :, None] * v_2 - - -def get_p_direct(splitted: TS) -> T: - raw_base = [] - for i in range(constants.DIM): - u = splitted[i] - for j in range(i): - u = remove_projection(u, raw_base[j]) - raw_base.append(u) - p = torch.stack(raw_base, dim=3) - p = p / torch.norm(p, p=2, dim=4)[:, :, :, :, None] # + self.noise[None, None, :, :] - return p - - -def split_gm(splitted: TS) -> TS: - p = get_p_direct(splitted) - # eigenvalues - eigen = splitted[-3] ** 2 + constants.EPSILON - mu = splitted[-2] - phi = splitted[-1].squeeze(3) - return mu, p, phi, eigen - - -class DecompositionNetwork(nn.Module): - - def forward_bottom(self, x): - return self.l1(x).view(-1, self.bottom_width, self.embed_dim) - - def forward_upper(self, x): - return self.to_zb(x) - - def forward(self, x): - x = self.forward_bottom(x) - x = self.forward_upper(x) - return x - - def __init__(self, opt: Options, act=nnf.relu, norm_layer: nn.Module = nn.LayerNorm): - super(DecompositionNetwork, self).__init__() - self.bottom_width = opt.num_gaussians - self.embed_dim = opt.dim_h - self.l1 = nn.Linear(opt.dim_z, self.bottom_width * opt.dim_h) - if opt.decomposition_network == 'mlp': - - self.to_zb = models_utils.MLP((opt.dim_h, *([2 * opt.dim_h] * opt.decomposition_num_layers), opt.dim_h)) - else: - self.to_zb = transformer.Transformer(opt.dim_h, opt.num_heads, opt.num_layers, act=act, - norm_layer=norm_layer) - - -class OccupancyMlP(nn.Module): - ## base on DeepSDF https://github.com/facebookresearch/DeepSDF - def forward(self, x, z): - x_ = x = torch.cat((x, z), dim=-1) - for i, layer in enumerate(self.layers): - if layer == self.latent_in: - x = torch.cat([x, x_], 2) - x = layer(x) - if i < len(self.layers) - 2: - x = self.relu(x) - # x = self.dropout(self.relu(x)) - # files_utils.save_pickle(x.detach().cpu(), f"/home/amirh/projects/spaghetti_private/assets/debug/out_{i}") - return x - - def __init__(self, opt: Options): - super(OccupancyMlP, self).__init__() - dim_in = 2 * (opt.pos_dim + constants.DIM) - dims = [dim_in] + opt.head_occ_size * [dim_in] + [1] - self.latent_in = opt.head_occ_size // 2 + opt.head_occ_size % 2 - dims[self.latent_in] += dims[0] - self.dropout = nn.Dropout(.2) - self.relu = nn.ReLU(True) - layers = [] - for i in range(0, len(dims) - 1): - layers.append(nn.utils.weight_norm(nn.Linear(dims[i], dims[i + 1]))) - self.layers = nn.ModuleList(layers) - - -class OccupancyNetwork(nn.Module): - - def get_pos(self, coords: T): - pos = self.pos_encoder(coords) - pos = torch.cat((coords, pos), dim=2) - return pos - - def forward_attention(self, coords: T, zh: T, mask: Optional[T] = None, alpha: TN = None) -> TS: - pos = self.get_pos(coords) - _, attn = self.occ_transformer.forward_with_attention(pos, zh, mask, alpha) - return attn - - def forward(self, coords: T, zh: T, mask: TN = None, alpha: TN = None) -> T: - pos = self.get_pos(coords) - x = self.occ_transformer(pos, zh, mask, alpha) - out = self.occ_mlp(pos, x) - if out.shape[-1] == 1: - out = out.squeeze(-1) - return out - - def __init__(self, opt: Options): - super(OccupancyNetwork, self).__init__() - self.pos_encoder = models_utils.SineLayer(constants.DIM, opt.pos_dim, is_first=True) - - if hasattr(opt, 'head_occ_type') and opt.head_occ_type == 'skip': - self.occ_mlp = OccupancyMlP(opt) - else: - self.occ_mlp = models_utils.MLP([(opt.pos_dim + constants.DIM)] + - [opt.dim_h] * opt.head_occ_size + [1]) - self.occ_transformer = transformer.Transformer(opt.pos_dim + constants.DIM, - opt.num_heads_head, opt.num_layers_head, - dim_ref=opt.dim_h) - -class DecompositionControl(models_utils.Model): - - def forward_bottom(self, x): - z_bottom = self.decomposition.forward_bottom(x) - return z_bottom - - def forward_upper(self, x): - x = self.decomposition.forward_upper(x) - return x - - def forward_split(self, x: T) -> Tuple[T, TS]: - b = x.shape[0] - raw_gmm = self.to_gmm(x).unsqueeze(1) - gmms = split_gm(torch.split(raw_gmm, self.split_shape, dim=3)) - zh = self.to_s(x) - zh = zh.view(b, -1, zh.shape[-1]) - return zh, gmms - - @staticmethod - def apply_gmm_affine(gmms: TS, affine: T): - mu, p, phi, eigen = gmms - if affine.dim() == 2: - affine = affine.unsqueeze(0).expand(mu.shape[0], *affine.shape) - mu_r = torch.einsum('bad, bpnd->bpna', affine, mu) - p_r = torch.einsum('bad, bpncd->bpnca', affine, p) - return mu_r, p_r, phi, eigen - - @staticmethod - def concat_gmm(gmm_a: TS, gmm_b: TS): - out = [] - num_gaussians = gmm_a[0].shape[2] // 2 - for element_a, element_b in zip(gmm_a, gmm_b): - out.append(torch.cat((element_a[:, :, :num_gaussians], element_b[:, :, :num_gaussians]), dim=2)) - return out - - def forward_mid(self, zs) -> Tuple[T, TS]: - zh, gmms = self.forward_split(zs) - if self.reflect is not None: - gmms_r = self.apply_gmm_affine(gmms, self.reflect) - gmms = self.concat_gmm(gmms, gmms_r) - return zh, gmms - - def forward_low(self, z_init): - zs = self.decomposition(z_init) - return zs - - def forward(self, z_init) -> Tuple[T, TS]: - zs = self.forward_low(z_init) - zh, gmms = self.forward_mid(zs) - return zh, gmms - - @staticmethod - def get_reflection(reflect_axes: Tuple[bool, ...]): - reflect = torch.eye(constants.DIM) - for i in range(constants.DIM): - if reflect_axes[i]: - reflect[i, i] = -1 - return reflect - - def __init__(self, opt: Options): - super(DecompositionControl, self).__init__() - if sum(opt.symmetric) > 0: - reflect = self.get_reflection(opt.symmetric) - self.register_buffer("reflect", reflect) - else: - self.reflect = None - self.split_shape = tuple((constants.DIM + 2) * [constants.DIM] + [1]) - self.decomposition = DecompositionNetwork(opt) - self.to_gmm = nn.Linear(opt.dim_h, sum(self.split_shape)) - self.to_s = nn.Linear(opt.dim_h, opt.dim_h) - - -class Spaghetti(models_utils.Model): - - def get_z(self, item: T): - return self.z(item) - - @staticmethod - def interpolate_(z, num_between: Optional[int] = None): - if num_between is None: - num_between = z.shape[0] - alphas = torch.linspace(0, 1, num_between, device=z.device) - while alphas.dim() != z.dim(): - alphas.unsqueeze_(-1) - z_between = alphas * z[1:2] + (- alphas + 1) * z[:1] - return z_between - - def interpolate_higher(self, z: T, num_between: Optional[int] = None): - z_between = self.interpolate_(z, num_between) - zh, gmms = self.decomposition_control.forward_split(self.decomposition_control.forward_upper(z_between)) - return zh, gmms - - def interpolate(self, item_a: int, item_b: int, num_between: int): - items = torch.tensor((item_a, item_b), dtype=torch.int64, device=self.device) - z = self.get_z(items) - z_between = self.interpolate_(z, num_between) - zh, gmms = self.decomposition_control(z_between) - return zh, gmms - - def get_disentanglement(self, items: T): - z_a = self.get_z(items) - z_b = self.decomposition_control.forward_bottom(z_a) - zh, gmms = self.decomposition_control.forward_split(self.decomposition_control.forward_upper(z_b)) - return z_a, z_b, zh, gmms - - def get_embeddings(self, item: T): - z = self.get_z(item) - zh, gmms = self.decomposition_control(z) - return zh, z, gmms - - def merge_zh_step_a(self, zh, gmms): - b, gp, g, _ = gmms[0].shape - mu, p, phi, eigen = [item.view(b, gp * g, *item.shape[3:]) for item in gmms] - p = p.reshape(*p.shape[:2], -1) - z_gmm = torch.cat((mu, p, phi.unsqueeze(-1), eigen), dim=2).detach() - z_gmm = self.from_gmm(z_gmm) - zh_ = zh + z_gmm - return zh_ - - def merge_zh(self, zh, gmms, mask: Optional[T] = None) -> TNS: - zh_ = self.merge_zh_step_a(zh, gmms) - zh_, attn = self.mixing_network.forward_with_attention(zh_, mask=mask) - return zh_, attn - - def forward_b(self, x, zh, gmms, mask: Optional[T] = None) -> T: - zh, _ = self.merge_zh(zh, gmms, mask) - return self.occupancy_network(x, zh, mask) - - def forward_a(self, item: T): - zh, z, gmms = self.get_embeddings(item) - return zh, z, gmms - - def get_attention(self, x, item) -> TS: - zh, z, gmms = self.forward_a(item) - zh, _ = self.merge_zh(zh, gmms) - return self.occupancy_network.forward_attention(x, zh) - - def forward(self, x, item: T) -> Tuple[T, T, TS, T]: - zh, z, gmms = self.forward_a(item) - return self.forward_b(x, zh, gmms), z, gmms, zh - - def forward_mid(self, x: T, zh: T) -> Tuple[T, TS]: - zh, gmms = self.decomposition_control.forward_mid(zh) - return self.forward_b(x, zh, gmms), gmms - - def get_random_embeddings(self, num_items: int): - if self.dist is None: - weights = self.z.weight.clone().detach() - mean = weights.mean(0) - weights = weights - mean[None, :] - cov = torch.einsum('nd,nc->dc', weights, weights) / (weights.shape[0] - 1) - self.dist = distributions.multivariate_normal.MultivariateNormal(mean, covariance_matrix=cov) - z_init = self.dist.sample((num_items,)) - return z_init - - def random_samples(self, num_items: int): - z_init = self.get_random_embeddings(num_items) - zh, gmms = self.decomposition_control(z_init) - return zh, gmms - - def __init__(self, opt: Options): - super(Spaghetti, self).__init__() - self.device = opt.device - self.opt = opt - self.z = nn.Embedding(opt.dataset_size, opt.dim_z) - torch.nn.init.normal_( - self.z.weight.data, - 0.0, - 1. / math.sqrt(opt.dim_z), - ) - self.decomposition_control = DecompositionControl(opt) - self.occupancy_network = OccupancyNetwork(opt) - self.from_gmm = nn.Linear(sum(self.decomposition_control.split_shape), opt.dim_h) - if opt.use_encoder: - self.mixing_network = transformer.Transformer(opt.dim_h, opt.num_heads, opt.num_layers, - act=nnf.relu, norm_layer=nn.LayerNorm) - else: - self.mixing_network = transformer.DummyTransformer() - self.dist = None diff --git a/spaces/Kabriske/Multilingual_Video_Subtitler/app.py b/spaces/Kabriske/Multilingual_Video_Subtitler/app.py deleted file mode 100644 index d172027b852a25453fd2ee6508f7a22ffd6ef502..0000000000000000000000000000000000000000 --- a/spaces/Kabriske/Multilingual_Video_Subtitler/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import os -from main import LANGS, Pipeline - - -def video_identity(video, source_language="English", target_language="Spanish"): - pipeline = Pipeline() - video_path = pipeline(video, "sample", source_language, target_language) - - return video_path - - -demo = gr.Interface(video_identity, - inputs=[gr.Video(), - gr.components.Dropdown(label="Source Language", choices=LANGS), - gr.components.Dropdown(label="Target Language", choices=LANGS), - ], - outputs="playable_video", - examples=[[ - os.path.join(os.path.dirname(__file__), - "sample/iPhone_14_Pro.mp4"), "English"]], - cache_examples=True, - title="Video Subtitler Demo 🍿🍿🍿", - description="This demo is a proof of concept for a video subtitler. " - ) - -pipeline = Pipeline() -demo.queue(max_size=15).launch(show_error=True) \ No newline at end of file diff --git a/spaces/Kairi7865/Kairi2/README.md b/spaces/Kairi7865/Kairi2/README.md deleted file mode 100644 index 5ec5d3b3c5571d976fe44272e279635f52ad5b26..0000000000000000000000000000000000000000 --- a/spaces/Kairi7865/Kairi2/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Kairi -emoji: 👀 -colorFrom: indigo -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kaludi/Food-Category-Classification_App/README.md b/spaces/Kaludi/Food-Category-Classification_App/README.md deleted file mode 100644 index 2b6a8a0639e22f1429ac789783d34a4863671e30..0000000000000000000000000000000000000000 --- a/spaces/Kaludi/Food-Category-Classification_App/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Food Category Classification App -emoji: 🍔 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kangarroar/ApplioRVC-Inference/tools/infer/infer-pm-index256.py b/spaces/Kangarroar/ApplioRVC-Inference/tools/infer/infer-pm-index256.py deleted file mode 100644 index da5430421f1de17a57379aefbe7919dd555b2f50..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/tools/infer/infer-pm-index256.py +++ /dev/null @@ -1,202 +0,0 @@ -""" - -对源特征进行检索 -""" -import os -import logging - -logger = logging.getLogger(__name__) - -import parselmouth -import torch - -os.environ["CUDA_VISIBLE_DEVICES"] = "0" -# import torchcrepe -from time import time as ttime - -# import pyworld -import librosa -import numpy as np -import soundfile as sf -import torch.nn.functional as F -from fairseq import checkpoint_utils - -# from models import SynthesizerTrn256#hifigan_nonsf -# from lib.infer_pack.models import SynthesizerTrn256NSF as SynthesizerTrn256#hifigan_nsf -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid as SynthesizerTrn256, -) # hifigan_nsf -from scipy.io import wavfile - -# from lib.infer_pack.models import SynthesizerTrnMs256NSFsid_sim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsimFlow as SynthesizerTrn256#hifigan_nsf - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_path = r"E:\codes\py39\vits_vc_gpu_train\assets\hubert\hubert_base.pt" # -logger.info("Load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -model = model.half() -model.eval() - -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],183,256,is_half=True)#hifigan#512#256 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],109,256,is_half=True)#hifigan#512#256 -net_g = SynthesizerTrn256( - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 183, - 256, - is_half=True, -) # hifigan#512#256#no_dropout -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,3,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],0)#ts3 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2],512,[16,16,4],0)#hifigan-ps-sr -# -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [5,5], 512, [15,15], 0)#ms -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,10], 512, [16,16], 0)#idwt2 - -# weights=torch.load("infer/ft-mi_1k-noD.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder-flow-enc_q_1k.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder_true_1k.pt") -# weights=torch.load("infer/ft-mi-sim1k.pt") -weights = torch.load("infer/ft-mi-no_opt-no_dropout.pt") -logger.debug(net_g.load_state_dict(weights, strict=True)) - -net_g.eval().to(device) -net_g.half() - - -def get_f0(x, p_len, f0_up_key=0): - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = ( - parselmouth.Sound(x, 16000) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0 *= pow(2, f0_up_key / 12) - f0bak = f0.copy() - - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - # f0_mel[f0_mel > 188] = 188 - f0_coarse = np.rint(f0_mel).astype(np.int32) - return f0_coarse, f0bak - - -import faiss - -index = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index") -big_npy = np.load("infer/big_src_feature_mi.npy") -ta0 = ta1 = ta2 = 0 -for idx, name in enumerate( - [ - "冬之花clip1.wav", - ] -): ## - wav_path = "todo-songs/%s" % name # - f0_up_key = -2 # - audio, sampling_rate = sf.read(wav_path) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - - feats = torch.from_numpy(audio).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - if torch.cuda.is_available(): - torch.cuda.synchronize() - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - ####索引优化 - npy = feats[0].cpu().numpy().astype("float32") - D, I = index.search(npy, 1) - feats = ( - torch.from_numpy(big_npy[I.squeeze()].astype("float16")).unsqueeze(0).to(device) - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if torch.cuda.is_available(): - torch.cuda.synchronize() - t1 = ttime() - # p_len = min(feats.shape[1],10000,pitch.shape[0])#太大了爆显存 - p_len = min(feats.shape[1], 10000) # - pitch, pitchf = get_f0(audio, p_len, f0_up_key) - p_len = min(feats.shape[1], 10000, pitch.shape[0]) # 太大了爆显存 - if torch.cuda.is_available(): - torch.cuda.synchronize() - t2 = ttime() - feats = feats[:, :p_len, :] - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - p_len = torch.LongTensor([p_len]).to(device) - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - sid = torch.LongTensor([0]).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - with torch.no_grad(): - audio = ( - net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - .numpy() - ) # nsf - if torch.cuda.is_available(): - torch.cuda.synchronize() - t3 = ttime() - ta0 += t1 - t0 - ta1 += t2 - t1 - ta2 += t3 - t2 - # wavfile.write("ft-mi_1k-index256-noD-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-freeze-vocoder-flow-enc_q_1k-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-sim1k-%s.wav"%name, 40000, audio)## - wavfile.write("ft-mi-no_opt-no_dropout-%s.wav" % name, 40000, audio) ## - - -logger.debug("%.2fs %.2fs %.2fs", ta0, ta1, ta2) # diff --git a/spaces/KenjieDec/GPEN/__init_paths.py b/spaces/KenjieDec/GPEN/__init_paths.py deleted file mode 100644 index cd0eb1f793f7a6237099d75cb66496f5d32a693c..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/__init_paths.py +++ /dev/null @@ -1,21 +0,0 @@ -''' -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -''' -import os.path as osp -import sys - -def add_path(path): - if path not in sys.path: - sys.path.insert(0, path) - -this_dir = osp.dirname(__file__) - -path = osp.join(this_dir, 'retinaface') -add_path(path) - -path = osp.join(this_dir, 'sr_model') -add_path(path) - -path = osp.join(this_dir, 'face_model') -add_path(path) diff --git a/spaces/KenjieDec/GPEN/face_model/op/fused_act.py b/spaces/KenjieDec/GPEN/face_model/op/fused_act.py deleted file mode 100644 index a2099ca2ab5f7dd574c15543930364b4ab817ef8..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/face_model/op/fused_act.py +++ /dev/null @@ -1,96 +0,0 @@ -import os -import platform - -import torch -from torch import nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load, _import_module_from_library - -# if running GPEN without cuda, please comment line 11-19 -if platform.system() == 'Linux' and torch.cuda.is_available(): - module_path = os.path.dirname(__file__) - fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], - ) - - -#fused = _import_module_from_library('fused', '/tmp/torch_extensions/fused', True) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5, device='cpu'): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - self.device = device - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale, self.device) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5, device='cpu'): - if platform.system() == 'Linux' and torch.cuda.is_available() and device != 'cpu': - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) - else: - return scale * F.leaky_relu(input + bias.view((1, -1)+(1,)*(len(input.shape)-2)), negative_slope=negative_slope) \ No newline at end of file diff --git a/spaces/Kurkur99/Sentiment_analysis/sentiment_labeling.py b/spaces/Kurkur99/Sentiment_analysis/sentiment_labeling.py deleted file mode 100644 index 045bc4c518f586a889544cea2662010f174c8dee..0000000000000000000000000000000000000000 --- a/spaces/Kurkur99/Sentiment_analysis/sentiment_labeling.py +++ /dev/null @@ -1,20 +0,0 @@ -import pandas as pd - -def label_sentiment(rating): - """Label sentiment based on the rating.""" - if rating in [1, 2]: - return 'negative' - elif rating == 3: - return 'neutral' - elif rating in [4, 5]: - return 'positive' - else: - return 'unknown' - -def add_sentiment_column(data: pd.DataFrame, rating_col_name='rating'): - """Add a sentiment column to the dataframe based on the ratings.""" - if rating_col_name not in data.columns: - raise ValueError(f"Column '{rating_col_name}' not found in the dataframe.") - - data['sentiment'] = data[rating_col_name].apply(label_sentiment) - return data diff --git a/spaces/Lamai/LAMAIGPT/autogpt/agent/agent.py b/spaces/Lamai/LAMAIGPT/autogpt/agent/agent.py deleted file mode 100644 index ee7885f8844022597321fa6b492430ec34c0d6b9..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/agent/agent.py +++ /dev/null @@ -1,197 +0,0 @@ -from colorama import Fore, Style - -from autogpt.app import execute_command, get_command -from autogpt.chat import chat_with_ai, create_chat_message -from autogpt.config import Config -from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques -from autogpt.json_utils.utilities import validate_json -from autogpt.logs import logger, print_assistant_thoughts -from autogpt.speech import say_text -from autogpt.spinner import Spinner -from autogpt.utils import clean_input - - -class Agent: - """Agent class for interacting with Auto-GPT. - - Attributes: - ai_name: The name of the agent. - memory: The memory object to use. - full_message_history: The full message history. - next_action_count: The number of actions to execute. - system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully. - Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals. - - triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is: - Determine which next command to use, and respond using the format specified above: - The triggering prompt is not part of the system prompt because between the system prompt and the triggering - prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve. - SYSTEM PROMPT - CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant) - TRIGGERING PROMPT - - The triggering prompt reminds the AI about its short term meta task (defining the next task) - """ - - def __init__( - self, - ai_name, - memory, - full_message_history, - next_action_count, - system_prompt, - triggering_prompt, - ): - self.ai_name = ai_name - self.memory = memory - self.full_message_history = full_message_history - self.next_action_count = next_action_count - self.system_prompt = system_prompt - self.triggering_prompt = triggering_prompt - - def start_interaction_loop(self): - # Interaction Loop - cfg = Config() - loop_count = 0 - command_name = None - arguments = None - user_input = "" - - while True: - # Discontinue if continuous limit is reached - loop_count += 1 - if ( - cfg.continuous_mode - and cfg.continuous_limit > 0 - and loop_count > cfg.continuous_limit - ): - logger.typewriter_log( - "Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}" - ) - break - - # Send message to AI, get response - with Spinner("Thinking... "): - assistant_reply = chat_with_ai( - self.system_prompt, - self.triggering_prompt, - self.full_message_history, - self.memory, - cfg.fast_token_limit, - ) # TODO: This hardcodes the model to use GPT3.5. Make this an argument - - assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply) - - # Print Assistant thoughts - if assistant_reply_json != {}: - validate_json(assistant_reply_json, "llm_response_format_1") - # Get command name and arguments - try: - print_assistant_thoughts(self.ai_name, assistant_reply_json) - command_name, arguments = get_command(assistant_reply_json) - # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"] - if cfg.speak_mode: - say_text(f"I want to execute {command_name}") - except Exception as e: - logger.error("Error: \n", str(e)) - - if not cfg.continuous_mode and self.next_action_count == 0: - ### GET USER AUTHORIZATION TO EXECUTE COMMAND ### - # Get key press: Prompt the user to press enter to continue or escape - # to exit - logger.typewriter_log( - "NEXT ACTION: ", - Fore.CYAN, - f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} " - f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", - ) - print( - "Enter 'y' to authorise command, 'y -N' to run N continuous " - "commands, 'n' to exit program, or enter feedback for " - f"{self.ai_name}...", - flush=True, - ) - while True: - console_input = clean_input( - Fore.MAGENTA + "Input:" + Style.RESET_ALL - ) - if console_input.lower().strip() == "y": - user_input = "GENERATE NEXT COMMAND JSON" - break - elif console_input.lower().strip() == "": - print("Invalid input format.") - continue - elif console_input.lower().startswith("y -"): - try: - self.next_action_count = abs( - int(console_input.split(" ")[1]) - ) - user_input = "GENERATE NEXT COMMAND JSON" - except ValueError: - print( - "Invalid input format. Please enter 'y -n' where n is" - " the number of continuous tasks." - ) - continue - break - elif console_input.lower() == "n": - user_input = "EXIT" - break - else: - user_input = console_input - command_name = "human_feedback" - break - - if user_input == "GENERATE NEXT COMMAND JSON": - logger.typewriter_log( - "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=", - Fore.MAGENTA, - "", - ) - elif user_input == "EXIT": - print("Exiting...", flush=True) - break - else: - # Print command - logger.typewriter_log( - "NEXT ACTION: ", - Fore.CYAN, - f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}" - f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", - ) - - # Execute command - if command_name is not None and command_name.lower().startswith("error"): - result = ( - f"Command {command_name} threw the following error: {arguments}" - ) - elif command_name == "human_feedback": - result = f"Human feedback: {user_input}" - else: - result = ( - f"Command {command_name} returned: " - f"{execute_command(command_name, arguments)}" - ) - if self.next_action_count > 0: - self.next_action_count -= 1 - - memory_to_add = ( - f"Assistant Reply: {assistant_reply} " - f"\nResult: {result} " - f"\nHuman Feedback: {user_input} " - ) - - self.memory.add(memory_to_add) - - # Check if there's a result from the command append it to the message - # history - if result is not None: - self.full_message_history.append(create_chat_message("system", result)) - logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result) - else: - self.full_message_history.append( - create_chat_message("system", "Unable to execute command") - ) - logger.typewriter_log( - "SYSTEM: ", Fore.YELLOW, "Unable to execute command" - ) diff --git a/spaces/LanguageBind/LanguageBind/open_clip/coca_model.py b/spaces/LanguageBind/LanguageBind/open_clip/coca_model.py deleted file mode 100644 index 039453af70d1c865dd7cc6016f732aff2f7dc3d2..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/open_clip/coca_model.py +++ /dev/null @@ -1,458 +0,0 @@ -from typing import Optional - -import torch -from torch import nn -from torch.nn import functional as F -import numpy as np -from dataclasses import dataclass - -from .transformer import ( - LayerNormFp32, - LayerNorm, - QuickGELU, - MultimodalTransformer, -) -from .model import CLIPTextCfg, CLIPVisionCfg, _build_vision_tower, _build_text_tower - -try: - from transformers import ( - BeamSearchScorer, - LogitsProcessorList, - TopPLogitsWarper, - TopKLogitsWarper, - RepetitionPenaltyLogitsProcessor, - MinLengthLogitsProcessor, - MaxLengthCriteria, - StoppingCriteriaList - ) - - GENERATION_TYPES = { - "top_k": TopKLogitsWarper, - "top_p": TopPLogitsWarper, - "beam_search": "beam_search" - } - _has_transformers = True -except ImportError as e: - GENERATION_TYPES = { - "top_k": None, - "top_p": None, - "beam_search": "beam_search" - } - _has_transformers = False - - -@dataclass -class MultimodalCfg(CLIPTextCfg): - mlp_ratio: int = 4 - dim_head: int = 64 - heads: int = 8 - n_queries: int = 256 - attn_pooler_heads: int = 8 - - -def _build_text_decoder_tower( - embed_dim, - multimodal_cfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, -): - multimodal_cfg = MultimodalCfg(**multimodal_cfg) if isinstance(multimodal_cfg, dict) else multimodal_cfg - act_layer = QuickGELU if quick_gelu else nn.GELU - norm_layer = ( - LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm - ) - - decoder = MultimodalTransformer( - context_length=multimodal_cfg.context_length, - width=multimodal_cfg.width, - heads=multimodal_cfg.heads, - layers=multimodal_cfg.layers, - ls_init_value=multimodal_cfg.ls_init_value, - output_dim=embed_dim, - act_layer=act_layer, - norm_layer=norm_layer, - ) - - return decoder - - -class CoCa(nn.Module): - def __init__( - self, - embed_dim, - multimodal_cfg: MultimodalCfg, - text_cfg: CLIPTextCfg, - vision_cfg: CLIPVisionCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, - pad_id: int = 0, - ): - super().__init__() - multimodal_cfg = MultimodalCfg(**multimodal_cfg) if isinstance(multimodal_cfg, dict) else multimodal_cfg - text_cfg = CLIPTextCfg(**text_cfg) if isinstance(text_cfg, dict) else text_cfg - vision_cfg = CLIPVisionCfg(**vision_cfg) if isinstance(vision_cfg, dict) else vision_cfg - - self.text = _build_text_tower( - embed_dim=embed_dim, - text_cfg=text_cfg, - quick_gelu=quick_gelu, - cast_dtype=cast_dtype, - ) - - vocab_size = ( - text_cfg.vocab_size # for hf models - if hasattr(text_cfg, "hf_model_name") and text_cfg.hf_model_name is not None - else text_cfg.vocab_size - ) - - self.visual = _build_vision_tower( - embed_dim=embed_dim, - vision_cfg=vision_cfg, - quick_gelu=quick_gelu, - cast_dtype=cast_dtype, - ) - - self.text_decoder = _build_text_decoder_tower( - vocab_size, - multimodal_cfg=multimodal_cfg, - quick_gelu=quick_gelu, - cast_dtype=cast_dtype, - ) - - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - self.pad_id = pad_id - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - self.text.set_grad_checkpointing(enable) - self.text_decoder.set_grad_checkpointing(enable) - - def _encode_image(self, images, normalize=True): - image_latent, tokens_embs = self.visual(images) - image_latent = F.normalize(image_latent, dim=-1) if normalize else image_latent - return image_latent, tokens_embs - - def _encode_text(self, text, normalize=True, embed_cls=True): - text = text[:, :-1] if embed_cls else text # make space for CLS token - text_latent, token_emb = self.text(text) - text_latent = F.normalize(text_latent, dim=-1) if normalize else text_latent - return text_latent, token_emb - - def encode_image(self, images, normalize=True): - image_latent, _ = self._encode_image(images, normalize=normalize) - return image_latent - - def encode_text(self, text, normalize=True, embed_cls=True): - text_latent, _ = self._encode_text(text, normalize=normalize, embed_cls=embed_cls) - return text_latent - - def forward(self, image, text, embed_cls=True, image_latent=None, image_embs=None): - text_latent, token_embs = self._encode_text(text, embed_cls=embed_cls) - if image_latent is None or image_embs is None: - image_latent, image_embs = self._encode_image(image) - - # TODO: add assertion to avoid bugs? - labels = text[:, -token_embs.shape[1]:] - - logits = self.text_decoder(image_embs, token_embs) - return { - "image_features": image_latent, - "text_features": text_latent, - "logits": logits, - "labels": labels, - "logit_scale": self.logit_scale.exp() - } - - def generate( - self, - image, - text=None, - seq_len=30, - max_seq_len=77, - temperature=1., - generation_type="beam_search", - top_p=0.1, # keep tokens in the 1 - top_p quantile - top_k=1, # keeps the top_k most probable tokens - pad_token_id=None, - eos_token_id=None, - sot_token_id=None, - num_beams=6, - num_beam_groups=3, - min_seq_len=5, - stopping_criteria=None, - repetition_penalty=1.0, - fixed_output_length=False # if True output.shape == (batch_size, seq_len) - ): - # taking many ideas and components from HuggingFace GenerationMixin - # https://huggingface.co/docs/transformers/main/en/main_classes/text_generation - assert _has_transformers, "Please install transformers for generate functionality. `pip install transformers`." - assert seq_len > min_seq_len, "seq_len must be larger than min_seq_len" - - with torch.no_grad(): - sot_token_id = 49406 if sot_token_id is None else sot_token_id - eos_token_id = 49407 if eos_token_id is None else eos_token_id - pad_token_id = self.pad_id if pad_token_id is None else pad_token_id - logit_processor = LogitsProcessorList( - [ - MinLengthLogitsProcessor(min_seq_len, eos_token_id), - RepetitionPenaltyLogitsProcessor(repetition_penalty), - ] - ) - - if stopping_criteria is None: - stopping_criteria = [MaxLengthCriteria(max_length=seq_len)] - - stopping_criteria = StoppingCriteriaList( - stopping_criteria - ) - - device = image.device - - if generation_type == "beam_search": - output = self._generate_beamsearch( - image_inputs = image, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - sot_token_id=sot_token_id, - num_beams=num_beams, - num_beam_groups=num_beam_groups, - min_seq_len=min_seq_len, - stopping_criteria=stopping_criteria, - logit_processor=logit_processor, - ) - if fixed_output_length and output.shape[1] < seq_len: - return torch.cat( - (output, torch.ones(output.shape[0], seq_len-output.shape[1], device=device, dtype=output.dtype) * self.pad_id), - dim=1 - ) - return output - - elif generation_type == "top_p": - logit_warper = GENERATION_TYPES[generation_type](top_p) - elif generation_type == "top_k": - logit_warper = GENERATION_TYPES[generation_type](top_k) - else: - raise ValueError( - f"generation_type has to be one of " - f"{'| ' + ' | '.join(list(GENERATION_TYPES.keys())) + ' |'}." - ) - - image_latent, image_embs = self._encode_image(image) - - if text is None: - text = torch.ones((image.shape[0], 1), device=device, dtype=torch.long) * sot_token_id - - was_training = self.training - num_dims = len(text.shape) - - if num_dims == 1: - text = text[None, :] - - cur_len = text.shape[1] - self.eval() - out = text - - while True: - x = out[:, -max_seq_len:] - cur_len = x.shape[1] - logits = self(image, x, image_latent=image_latent, image_embs=image_embs, embed_cls=False)["logits"][:, -1] - mask = (out[:, -1] == eos_token_id) | (out[:, -1] == pad_token_id) - sample = torch.ones((out.shape[0], 1), device=device, dtype=torch.long) * pad_token_id - - if mask.all(): - if not fixed_output_length: - break - else: - logits = logits[~mask, :] - filtered_logits = logit_processor(x[~mask, :], logits) - filtered_logits = logit_warper(x[~mask, :], filtered_logits) - probs = F.softmax(filtered_logits / temperature, dim=-1) - - if (cur_len + 1 == seq_len): - sample[~mask, :] = torch.ones((sum(~mask), 1), device=device, dtype=torch.long) * eos_token_id - else: - sample[~mask, :] = torch.multinomial(probs, 1) - - out = torch.cat((out, sample), dim=-1) - - cur_len += 1 - - if stopping_criteria(out, None): - break - - if num_dims == 1: - out = out.squeeze(0) - - self.train(was_training) - return out - - def _generate_beamsearch( - self, - image_inputs, - pad_token_id=None, - eos_token_id=None, - sot_token_id=None, - num_beams=6, - num_beam_groups=3, - min_seq_len=5, - stopping_criteria=None, - logit_processor=None, - logit_warper=None, - ): - device = image_inputs.device - batch_size = image_inputs.shape[0] - image_inputs = torch.repeat_interleave(image_inputs, num_beams, dim=0) - image_latent, image_embs = self._encode_image(image_inputs) - - input_ids = torch.ones((batch_size * num_beams, 1), device=device, dtype=torch.long) - input_ids = input_ids * sot_token_id - beam_scorer = BeamSearchScorer( - batch_size=batch_size, - num_beams=num_beams, - device=device, - num_beam_groups=num_beam_groups, - ) - # instantiate logits processors - logits_processor = ( - LogitsProcessorList([MinLengthLogitsProcessor(min_seq_len, eos_token_id=eos_token_id)]) - if logit_processor is None - else logit_processor - ) - - batch_size = len(beam_scorer._beam_hyps) - num_beams = beam_scorer.num_beams - num_beam_groups = beam_scorer.num_beam_groups - num_sub_beams = num_beams // num_beam_groups - batch_beam_size, cur_len = input_ids.shape - beam_indices = None - - if num_beams * batch_size != batch_beam_size: - raise ValueError( - f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}." - ) - - beam_scores = torch.full((batch_size, num_beams), -1e9, dtype=torch.float, device=device) - # initialise score of first beam of each group with 0 and the rest with 1e-9. This ensures that the beams in - # the same group don't produce same tokens everytime. - beam_scores[:, ::num_sub_beams] = 0 - beam_scores = beam_scores.view((batch_size * num_beams,)) - - while True: - - # predicted tokens in cur_len step - current_tokens = torch.zeros(batch_size * num_beams, dtype=input_ids.dtype, device=device) - - # indices which will form the beams in the next time step - reordering_indices = torch.zeros(batch_size * num_beams, dtype=torch.long, device=device) - - # do one decoder step on all beams of all sentences in batch - model_inputs = prepare_inputs_for_generation(input_ids=input_ids, image_inputs=image_inputs) - outputs = self( - model_inputs['images'], - model_inputs['text'], - embed_cls=False, - image_latent=image_latent, - image_embs=image_embs - ) - - for beam_group_idx in range(num_beam_groups): - group_start_idx = beam_group_idx * num_sub_beams - group_end_idx = min(group_start_idx + num_sub_beams, num_beams) - group_size = group_end_idx - group_start_idx - - # indices of beams of current group among all sentences in batch - batch_group_indices = [] - - for batch_idx in range(batch_size): - batch_group_indices.extend( - [batch_idx * num_beams + idx for idx in range(group_start_idx, group_end_idx)] - ) - group_input_ids = input_ids[batch_group_indices] - - # select outputs of beams of currentg group only - next_token_logits = outputs['logits'][batch_group_indices, -1, :] - vocab_size = next_token_logits.shape[-1] - - next_token_scores_processed = logits_processor( - group_input_ids, next_token_logits, current_tokens=current_tokens, beam_group_idx=beam_group_idx - ) - next_token_scores = next_token_scores_processed + beam_scores[batch_group_indices].unsqueeze(-1) - next_token_scores = next_token_scores.expand_as(next_token_scores_processed) - - # reshape for beam search - next_token_scores = next_token_scores.view(batch_size, group_size * vocab_size) - - next_token_scores, next_tokens = torch.topk( - next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True - ) - - next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor") - next_tokens = next_tokens % vocab_size - - # stateless - process_beam_indices = sum(beam_indices, ()) if beam_indices is not None else None - beam_outputs = beam_scorer.process( - group_input_ids, - next_token_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - beam_indices=process_beam_indices, - ) - beam_scores[batch_group_indices] = beam_outputs["next_beam_scores"] - beam_next_tokens = beam_outputs["next_beam_tokens"] - beam_idx = beam_outputs["next_beam_indices"] - - input_ids[batch_group_indices] = group_input_ids[beam_idx] - group_input_ids = torch.cat([group_input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) - current_tokens[batch_group_indices] = group_input_ids[:, -1] - - # (beam_idx // group_size) -> batch_idx - # (beam_idx % group_size) -> offset of idx inside the group - reordering_indices[batch_group_indices] = ( - num_beams * torch.div(beam_idx, group_size, rounding_mode="floor") + group_start_idx + (beam_idx % group_size) - ) - - input_ids = torch.cat([input_ids, current_tokens.unsqueeze(-1)], dim=-1) - - # increase cur_len - cur_len = cur_len + 1 - if beam_scorer.is_done or stopping_criteria(input_ids, None): - break - - final_beam_indices = sum(beam_indices, ()) if beam_indices is not None else None - sequence_outputs = beam_scorer.finalize( - input_ids, - beam_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - max_length=stopping_criteria.max_length, - beam_indices=final_beam_indices, - ) - return sequence_outputs['sequences'] - - -def prepare_inputs_for_generation(input_ids, image_inputs, past=None, **kwargs): - if past: - input_ids = input_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - else: - position_ids = None - return { - "text": input_ids, - "images": image_inputs, - "past_key_values": past, - "position_ids": position_ids, - "attention_mask": attention_mask, - } diff --git a/spaces/LuxOAI/ChatGpt-Web/app/components/markdown.tsx b/spaces/LuxOAI/ChatGpt-Web/app/components/markdown.tsx deleted file mode 100644 index 47e7a4d3c22164e6be4f469ca6a6283d9635ba02..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/components/markdown.tsx +++ /dev/null @@ -1,178 +0,0 @@ -import ReactMarkdown from "react-markdown"; -import "katex/dist/katex.min.css"; -import RemarkMath from "remark-math"; -import RemarkBreaks from "remark-breaks"; -import RehypeKatex from "rehype-katex"; -import RemarkGfm from "remark-gfm"; -import RehypeHighlight from "rehype-highlight"; -import mermaid from "mermaid"; -import { useRef, useState, RefObject, useEffect } from "react"; -import { copyToClipboard } from "../utils"; - -import LoadingIcon from "../icons/three-dots.svg"; -import React from "react"; - -export function Mermaid(props: { code: string; onError: () => void }) { - const ref = useRef(null); - - useEffect(() => { - if (props.code && ref.current) { - mermaid - .run({ - nodes: [ref.current], - }) - .catch((e) => { - props.onError(); - console.error("[Mermaid] ", e.message); - }); - } - }, [props.code]); - - function viewSvgInNewWindow() { - const svg = ref.current?.querySelector("svg"); - if (!svg) return; - const text = new XMLSerializer().serializeToString(svg); - const blob = new Blob([text], { type: "image/svg+xml" }); - const url = URL.createObjectURL(blob); - const win = window.open(url); - if (win) { - win.onload = () => URL.revokeObjectURL(url); - } - } - - return ( -
              viewSvgInNewWindow()} - > - {props.code} -
              - ); -} - -export function PreCode(props: { children: any }) { - const ref = useRef(null); - const [mermaidCode, setMermaidCode] = useState(""); - - useEffect(() => { - if (!ref.current) return; - const mermaidDom = ref.current.querySelector("code.language-mermaid"); - if (mermaidDom) { - setMermaidCode((mermaidDom as HTMLElement).innerText); - } - }, [props.children]); - - if (mermaidCode) { - return setMermaidCode("")} />; - } - return ( -
              -       {
              -          if (ref.current) {
              -            const code = ref.current.innerText;
              -            copyToClipboard(code);
              -          }
              -        }}
              -      >
              -      {props.children}
              -    
              - ); -} - -function _MarkDownContent(props: { content: string }) { - return ( - { - const href = aProps.href || ""; - const isInternal = /^\/#/i.test(href); - const target = isInternal ? "_self" : aProps.target ?? "_blank"; - return ; - }, - }} - > - {props.content} - - ); -} - -export const MarkdownContent = React.memo(_MarkDownContent); - -export function Markdown( - props: { - content: string; - loading?: boolean; - fontSize?: number; - parentRef: RefObject; - defaultShow?: boolean; - } & React.DOMAttributes, -) { - const mdRef = useRef(null); - const renderedHeight = useRef(0); - const inView = useRef(!!props.defaultShow); - - const parent = props.parentRef.current; - const md = mdRef.current; - - const checkInView = () => { - if (parent && md) { - const parentBounds = parent.getBoundingClientRect(); - const twoScreenHeight = Math.max(500, parentBounds.height * 2); - const mdBounds = md.getBoundingClientRect(); - const parentTop = parentBounds.top - twoScreenHeight; - const parentBottom = parentBounds.bottom + twoScreenHeight; - const isOverlap = - Math.max(parentTop, mdBounds.top) <= - Math.min(parentBottom, mdBounds.bottom); - inView.current = isOverlap; - } - - if (inView.current && md) { - renderedHeight.current = Math.max( - renderedHeight.current, - md.getBoundingClientRect().height, - ); - } - }; - - setTimeout(() => checkInView(), 1); - - return ( -
              0 - ? renderedHeight.current - : "auto", - }} - ref={mdRef} - onContextMenu={props.onContextMenu} - onDoubleClickCapture={props.onDoubleClickCapture} - > - {inView.current && - (props.loading ? ( - - ) : ( - - ))} -
              - ); -} diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/preprocess_text.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 223b70653e5e5c8da94d65ae45325737f1fc83cb..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,120 +0,0 @@ -import json -import os.path -from collections import defaultdict -from random import shuffle -from typing import Optional - -from tqdm import tqdm -import click -from text.cleaner import clean_text - - -@click.command() -@click.option( - "--transcription-path", - default="filelists/genshin.list", - type=click.Path(exists=True, file_okay=True, dir_okay=False), -) -@click.option("--cleaned-path", default=None) -@click.option("--train-path", default="filelists/train.list") -@click.option("--val-path", default="filelists/val.list") -@click.option( - "--config-path", - default="configs/config.json", - type=click.Path(exists=True, file_okay=True, dir_okay=False), -) -@click.option("--val-per-spk", default=4) -@click.option("--max-val-total", default=8) -@click.option("--clean/--no-clean", default=True) -def main( - transcription_path: str, - cleaned_path: Optional[str], - train_path: str, - val_path: str, - config_path: str, - val_per_spk: int, - max_val_total: int, - clean: bool, -): - if cleaned_path is None: - cleaned_path = transcription_path + ".cleaned" - - if clean: - out_file = open(cleaned_path, "w", encoding="utf-8") - for line in tqdm(open(transcription_path, encoding="utf-8").readlines()): - try: - utt, spk, language, text = line.strip().split("|") - norm_text, phones, tones, word2ph = clean_text(text, language) - out_file.write( - "{}|{}|{}|{}|{}|{}|{}\n".format( - utt, - spk, - language, - norm_text, - " ".join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]), - ) - ) - except Exception as error: - print("err!", line, error) - - out_file.close() - - transcription_path = cleaned_path - - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open(transcription_path, encoding="utf-8") as f: - audioPaths = set() - countSame = 0 - countNotFound = 0 - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split("|") - if utt in audioPaths: - # 过滤数据集错误:相同的音频匹配多个文本,导致后续bert出问题 - print(f"重复音频文本:{line}") - countSame += 1 - continue - if not os.path.isfile(utt): - print(f"没有找到对应的音频:{utt}") - countNotFound += 1 - continue - audioPaths.add(utt) - spk_utt_map[spk].append(line) - - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - print(f"总重复音频数:{countSame},总未找到的音频数:{countNotFound}") - - train_list = [] - val_list = [] - - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list += utts[:val_per_spk] - train_list += utts[val_per_spk:] - - if len(val_list) > max_val_total: - train_list += val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open(train_path, "w", encoding="utf-8") as f: - for line in train_list: - f.write(line) - - with open(val_path, "w", encoding="utf-8") as f: - for line in val_list: - f.write(line) - - config = json.load(open(config_path, encoding="utf-8")) - config["data"]["spk2id"] = spk_id_map - with open(config_path, "w", encoding="utf-8") as f: - json.dump(config, f, indent=2, ensure_ascii=False) - - -if __name__ == "__main__": - main() diff --git a/spaces/MajinSonic/EarthnDusk-EpicMix6_Realism/README.md b/spaces/MajinSonic/EarthnDusk-EpicMix6_Realism/README.md deleted file mode 100644 index e881d2a2d2285f2aa86897b727da942a1b2a881c..0000000000000000000000000000000000000000 --- a/spaces/MajinSonic/EarthnDusk-EpicMix6_Realism/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: EarthnDusk-EpicMix6 Realism -emoji: 🐠 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/__init__.py deleted file mode 100644 index 2af819d61d589cfec2e0ca46612a7456f42b831a..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -from .groundingdino import build_groundingdino diff --git a/spaces/Mashhoor/stabilityai-stable-diffusion-image-generator/app.py b/spaces/Mashhoor/stabilityai-stable-diffusion-image-generator/app.py deleted file mode 100644 index 9520517f687cf7229ddfab9d8c5f8af7f76b0bd4..0000000000000000000000000000000000000000 --- a/spaces/Mashhoor/stabilityai-stable-diffusion-image-generator/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-xl-base-1.0").launch() \ No newline at end of file diff --git a/spaces/MaximilianChen/Casper/app.py b/spaces/MaximilianChen/Casper/app.py deleted file mode 100644 index 7e4bac2f1d4a57f71675bc4590f20437892f6197..0000000000000000000000000000000000000000 --- a/spaces/MaximilianChen/Casper/app.py +++ /dev/null @@ -1,32 +0,0 @@ -from transformers import pipeline -import gradio as gr -import torch - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -asr = pipeline( - "automatic-speech-recognition", - model="MaximilianChen/Casper", - chunk_length_s=30, - device=device, -) - -def transcribe_audio(mic=None, file=None): - if mic is not None: - audio = mic - elif file is not None: - audio = file - else: - return "You must either provide a mic recording or a file" - transcription = asr(audio)["text"] - return transcription - - -gr.Interface( - fn=transcribe_audio, - inputs=[ - gr.Audio(source="microphone", type="filepath", optional=True), - gr.Audio(source="upload", type="filepath", optional=True), - ], - outputs="text", -).launch() \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/serving.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/serving.py deleted file mode 100644 index 895f61dc37adf40d93ea347817abbb18966e157e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/serving.py +++ /dev/null @@ -1,134 +0,0 @@ -# Lint as: python3 -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Examples of SavedModel export for tf-serving.""" - -from absl import app -from absl import flags -import tensorflow as tf - -from official.nlp.bert import bert_models -from official.nlp.bert import configs - -flags.DEFINE_integer("sequence_length", None, - "Sequence length to parse the tf.Example. If " - "sequence_length > 0, add a signature for serialized " - "tf.Example and define the parsing specification by the " - "sequence_length.") -flags.DEFINE_string("bert_config_file", None, - "Bert configuration file to define core bert layers.") -flags.DEFINE_string("model_checkpoint_path", None, - "File path to TF model checkpoint.") -flags.DEFINE_string("export_path", None, - "Destination folder to export the serving SavedModel.") - -FLAGS = flags.FLAGS - - -class BertServing(tf.keras.Model): - """Bert transformer encoder model for serving.""" - - def __init__(self, bert_config, name_to_features=None, name="serving_model"): - super(BertServing, self).__init__(name=name) - self.encoder = bert_models.get_transformer_encoder( - bert_config, sequence_length=None) - self.name_to_features = name_to_features - - def call(self, inputs): - input_word_ids = inputs["input_ids"] - input_mask = inputs["input_mask"] - input_type_ids = inputs["segment_ids"] - - encoder_outputs, _ = self.encoder( - [input_word_ids, input_mask, input_type_ids]) - return encoder_outputs - - def serve_body(self, input_ids, input_mask=None, segment_ids=None): - if segment_ids is None: - # Requires CLS token is the first token of inputs. - segment_ids = tf.zeros_like(input_ids) - if input_mask is None: - # The mask has 1 for real tokens and 0 for padding tokens. - input_mask = tf.where( - tf.equal(input_ids, 0), tf.zeros_like(input_ids), - tf.ones_like(input_ids)) - - inputs = dict( - input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids) - return self.call(inputs) - - @tf.function - def serve(self, input_ids, input_mask=None, segment_ids=None): - outputs = self.serve_body(input_ids, input_mask, segment_ids) - # Returns a dictionary to control SignatureDef output signature. - return {"outputs": outputs[-1]} - - @tf.function - def serve_examples(self, inputs): - features = tf.io.parse_example(inputs, self.name_to_features) - for key in list(features.keys()): - t = features[key] - if t.dtype == tf.int64: - t = tf.cast(t, tf.int32) - features[key] = t - return self.serve( - features["input_ids"], - input_mask=features["input_mask"] if "input_mask" in features else None, - segment_ids=features["segment_ids"] - if "segment_ids" in features else None) - - @classmethod - def export(cls, model, export_dir): - if not isinstance(model, cls): - raise ValueError("Invalid model instance: %s, it should be a %s" % - (model, cls)) - - signatures = { - "serving_default": - model.serve.get_concrete_function( - input_ids=tf.TensorSpec( - shape=[None, None], dtype=tf.int32, name="inputs")), - } - if model.name_to_features: - signatures[ - "serving_examples"] = model.serve_examples.get_concrete_function( - tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")) - tf.saved_model.save(model, export_dir=export_dir, signatures=signatures) - - -def main(_): - sequence_length = FLAGS.sequence_length - if sequence_length is not None and sequence_length > 0: - name_to_features = { - "input_ids": tf.io.FixedLenFeature([sequence_length], tf.int64), - "input_mask": tf.io.FixedLenFeature([sequence_length], tf.int64), - "segment_ids": tf.io.FixedLenFeature([sequence_length], tf.int64), - } - else: - name_to_features = None - bert_config = configs.BertConfig.from_json_file(FLAGS.bert_config_file) - serving_model = BertServing( - bert_config=bert_config, name_to_features=name_to_features) - checkpoint = tf.train.Checkpoint(model=serving_model.encoder) - checkpoint.restore(FLAGS.model_checkpoint_path - ).assert_existing_objects_matched().run_restore_ops() - BertServing.export(serving_model, FLAGS.export_path) - - -if __name__ == "__main__": - flags.mark_flag_as_required("bert_config_file") - flags.mark_flag_as_required("model_checkpoint_path") - flags.mark_flag_as_required("export_path") - app.run(main) diff --git a/spaces/NoCrypt/mikuTTS/rmvpe.py b/spaces/NoCrypt/mikuTTS/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/mikuTTS/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/Nymbo/OpenAI_TTS_Streaming_Whisperv3/README.md b/spaces/Nymbo/OpenAI_TTS_Streaming_Whisperv3/README.md deleted file mode 100644 index 437dc7cf4027a977d6438221e6271144e5b4155b..0000000000000000000000000000000000000000 --- a/spaces/Nymbo/OpenAI_TTS_Streaming_Whisperv3/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OpenAI TTS Streaming -emoji: 📊 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/cross_lingual_language_model/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/cross_lingual_language_model/README.md deleted file mode 100644 index af9128e39e5925e9411d162c2f24a19e4532d618..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/cross_lingual_language_model/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# Cross-Lingual Language Model Pre-training - -Below are some details for training Cross-Lingual Language Models (XLM) - similar to the ones presented in [Lample & Conneau, 2019](https://arxiv.org/pdf/1901.07291.pdf) - in Fairseq. The current implementation only supports the Masked Language Model (MLM) from the paper above. - -## Downloading and Tokenizing Monolingual Data - -Pointers to the monolingual data from wikipedia, used for training the XLM-style MLM model as well as details on processing (tokenization and BPE) it can be found in the [XLM Github Repository](https://github.com/facebookresearch/XLM#download--preprocess-monolingual-data). - -Let's assume the following for the code snippets in later sections to work -- Processed data is in the folder: monolingual_data/processed -- Each language has 3 files for train, test and validation. For example we have the following files for English: - train.en, valid.en -- We are training a model for 5 languages: Arabic (ar), German (de), English (en), Hindi (hi) and French (fr) -- The vocabulary file is monolingual_data/processed/vocab_mlm - - -## Fairseq Pre-processing and Binarization - -Pre-process and binarize the data with the MaskedLMDictionary and cross_lingual_lm task - -```bash -# Ensure the output directory exists -DATA_DIR=monolingual_data/fairseq_processed -mkdir -p "$DATA_DIR" - -for lg in ar de en hi fr -do - - fairseq-preprocess \ - --task cross_lingual_lm \ - --srcdict monolingual_data/processed/vocab_mlm \ - --only-source \ - --trainpref monolingual_data/processed/train \ - --validpref monolingual_data/processed/valid \ - --testpref monolingual_data/processed/test \ - --destdir monolingual_data/fairseq_processed \ - --workers 20 \ - --source-lang $lg - - # Since we only have a source language, the output file has a None for the - # target language. Remove this - - for stage in train test valid - - sudo mv "$DATA_DIR/$stage.$lg-None.$lg.bin" "$stage.$lg.bin" - sudo mv "$DATA_DIR/$stage.$lg-None.$lg.idx" "$stage.$lg.idx" - - done - -done -``` - -## Train a Cross-lingual Language Model similar to the XLM MLM model - -Use the following command to train the model on 5 languages. - -``` -fairseq-train \ ---task cross_lingual_lm monolingual_data/fairseq_processed \ ---save-dir checkpoints/mlm \ ---max-update 2400000 --save-interval 1 --no-epoch-checkpoints \ ---arch xlm_base \ ---optimizer adam --lr-scheduler reduce_lr_on_plateau \ ---lr-shrink 0.5 --lr 0.0001 --stop-min-lr 1e-09 \ ---dropout 0.1 \ ---criterion legacy_masked_lm_loss \ ---max-tokens 2048 --tokens-per-sample 256 --attention-dropout 0.1 \ ---dataset-impl lazy --seed 0 \ ---masked-lm-only \ ---monolingual-langs 'ar,de,en,hi,fr' --num-segment 5 \ ---ddp-backend=legacy_ddp -``` - -Some Notes: -- Using tokens_per_sample greater than 256 can cause OOM (out-of-memory) issues. Usually since MLM packs in streams of text, this parameter doesn't need much tuning. -- The Evaluation workflow for computing MLM Perplexity on test data is in progress. -- Finetuning this model on a downstream task is something which is not currently available. diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/megatron_11b/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/megatron_11b/README.md deleted file mode 100644 index 945c96c91e2e2d93466abc28d90bc25a1e7dd471..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/megatron_11b/README.md +++ /dev/null @@ -1,161 +0,0 @@ -# Megatron-11b - -Megatron-11b is a unidirectional language model with `11B` parameters based on [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf). Following the original Megatron work, we trained the model using intra-layer model parallelism with each layer's parameters split across 8 GPUs. - -Megatron-11b is trained on the same data and uses the same byte-pair encoding (BPE) as [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf). - -## Pre-trained models - -Model | Description | # params | # filesize | Download ----|---|---|---|--- -`megatron_11b` | megatron_11b unidirectional language model | 11B | 19Gb | [megatron_11b.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz) - -#### Architecture: - -Param | Value ----|--- -embed_dim | 3072 -ffn_dim | 3072 * 6 -layers | 72 -attention heads | 32 - -#### Training details: - -Param | value ----|--- -bsz | 512 -num_updates | 300,000 -peak_lr | 1.5e-04 -lr scheduler | inverse_sqrt -clip norm | 0.0 - - -## Example training command (model parallel) - -Megatron-11b contains too many parameters to train on a single GPU. Following -the original Megatron work, we adopt an intra-layer model parallel training -approach in which each layer's parameters are split across multiple GPUs and -activations and gradients are communicated during the forward/backward pass, -respectively. We similarly split the loss computation using the -`vocab_parallel_cross_entropy` criterion. - -The following training command illustrates how to do model parallel training in -fairseq. We assume that each machine (node) has 8 GPUs among which to split the -model parameters (`--model-parallel-size 8`). If you have access to multiple -nodes, you may combine this with data parallel training by increasing -`--distributed-world-size`. - -To train Megatron-11b on a single node: - - -```bash -fairseq-train \ - --distributed-world-size 8 \ - --memory-efficient-fp16 \ - --num-workers 2 \ - --model-parallel-size 8 \ - --criterion vocab_parallel_cross_entropy \ - --task language_modeling \ - --sample-break-mode none \ - --tokens-per-sample 1024 \ - --arch transformer_lm_megatron_11b \ - --share-decoder-input-output-embed \ - --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 --clip-norm 0.0 \ - --lr-scheduler inverse_sqrt --lr 0.00015 \ - --warmup-updates 3000 --weight-decay 0.01 \ - --dropout 0.1 --attention-dropout 0.1 \ - --batch-size 2 \ - --max-update 300000; -``` - -Note: Above was tested on `DGX-1` box, with `8xV100-32Gb` GPUs. - -## Results - -**[Wikitext103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)** - -Model | Valid perplexity | Test perplexity ----|---|--- -`megatron_11b` | 10.64 | 10.54 - - -## Evaluating `megatron_11b` on Wikitext-103 - -#### 1. Downloading Megatron-11b -```bash -# WARNING: this file is 19GB -wget https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz -tar -xzvf megatron_11b.tar.gz -``` - -#### 2. Download Wikitext-103 -```bash -wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip -unzip wikitext-103-raw-v1.zip -``` - -#### 3. Detokenize test tokens -Megatron-11b uses a byte-level BPE that expects raw (untokenized) input. Since -the wikitext-103 dataset comes tokenized, we apply a simple detokenization -process to restore the untokenized test set: - -```bash -python -m examples.megatron_11b.detok wikitext-103-raw/wiki.test.raw > wikitext-103-raw/wiki.test.detok -``` - -#### 4. BPE encoding -```bash -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' - -python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "wikitext-103-raw/wiki.test.detok" \ - --outputs "wikitext-103-raw/wiki.test.bpe" \ - --workers 60; -``` - -#### 5. Fairseq binarize -```bash -fairseq-preprocess \ - --only-source \ - --testpref wikitext-103-raw/wiki.test.bpe \ - --srcdict megatron_11b/dict.txt \ - --destdir wikitext103-bin; -``` - -#### 6. Evaluating perplexity. -We can now evaluate perplexity on the test set. Note that because we've modified -the test set (via detokenization and BPE), the perplexity reported by -`fairseq-eval-lm` needs to be renormalized. - -Compute unnormalized perplexity: - -```bash -DATA_PATH=wikitext103-bin/ -fairseq-eval-lm \ - $DATA_PATH \ - --path megatron_11b/model.pt \ - --task language_modeling \ - --gen-subset test \ - --batch-size 8 \ - --criterion cross_entropy \ - --context-window 992 \ - --distributed-world-size 8 \ - --model-parallel-size 8; -# Expected PPL (unnormalized_ppl): [8.46] -# Note: the eval command needs to run on 8 GPUs for the released model -``` -Renormalizing formula: `2 ^ ( log_2(unnormalized_PPL) * (270847 / 245566))`. -PPL After normalization: `10.54` - -To renormalize the perplexity, we must account for the change in token count -after detokenizing and appling BPE. The formula for this is: -`2 ^ ( log_2(unnormalized_PPL) * (new_token_cnt / orig_token_cnt))` - -For the wikitext-103 test set, the original token count is `245566` and the -token count after detokenization and applying BPE is `270847`. - -The perplexity after renormalization is: -`2 ^ ( log_2(8.46) * (270847 / 245566)) = 10.54` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_km_label.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_km_label.py deleted file mode 100644 index 8871307804d3f1e5c7cc49061614c69df26ab1ee..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_km_label.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import sys - -import numpy as np - -import joblib -import torch -import tqdm - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_km_label") - - -class ApplyKmeans(object): - def __init__(self, km_path): - self.km_model = joblib.load(km_path) - self.C_np = self.km_model.cluster_centers_.transpose() - self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True) - - self.C = torch.from_numpy(self.C_np) - self.Cnorm = torch.from_numpy(self.Cnorm_np) - if torch.cuda.is_available(): - self.C = self.C.cuda() - self.Cnorm = self.Cnorm.cuda() - - def __call__(self, x): - if isinstance(x, torch.Tensor): - dist = ( - x.pow(2).sum(1, keepdim=True) - - 2 * torch.matmul(x, self.C) - + self.Cnorm - ) - return dist.argmin(dim=1).cpu().numpy() - else: - dist = ( - (x ** 2).sum(1, keepdims=True) - - 2 * np.matmul(x, self.C_np) - + self.Cnorm_np - ) - return np.argmin(dist, axis=1) - - -def get_feat_iterator(feat_dir, split, nshard, rank): - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - with open(leng_path, "r") as f: - lengs = [int(line.rstrip()) for line in f] - offsets = [0] + np.cumsum(lengs[:-1]).tolist() - - def iterate(): - feat = np.load(feat_path, mmap_mode="r") - assert feat.shape[0] == (offsets[-1] + lengs[-1]) - for offset, leng in zip(offsets, lengs): - yield feat[offset: offset + leng] - - return iterate, len(lengs) - - -def dump_label(feat_dir, split, km_path, nshard, rank, lab_dir): - apply_kmeans = ApplyKmeans(km_path) - generator, num = get_feat_iterator(feat_dir, split, nshard, rank) - iterator = generator() - - lab_path = f"{lab_dir}/{split}_{rank}_{nshard}.km" - os.makedirs(lab_dir, exist_ok=True) - with open(lab_path, "w") as f: - for feat in tqdm.tqdm(iterator, total=num): - # feat = torch.from_numpy(feat).cuda() - lab = apply_kmeans(feat).tolist() - f.write(" ".join(map(str, lab)) + "\n") - logger.info("finished successfully") - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("km_path") - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("lab_dir") - args = parser.parse_args() - logging.info(str(args)) - - dump_label(**vars(args)) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py deleted file mode 100644 index 5a04851a74624e9c8ebc259805b7aed6c638b0de..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys - - -def _normalize_spaces(line): - return " ".join(line.split()) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False, type=str) - args = parser.parse_args() - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - for line in open(args.input_file): - for _ in range(args.repeat_times): - stream.write(_normalize_spaces(line) + "\n") - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/clib/libbase/balanced_assignment.cpp b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/clib/libbase/balanced_assignment.cpp deleted file mode 100644 index 1a5a1061f3892be5a17e49192f744c39e0d395e8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/clib/libbase/balanced_assignment.cpp +++ /dev/null @@ -1,109 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -/* -C++ code for solving the linear assignment problem. -Based on the Auction Algorithm from -https://dspace.mit.edu/bitstream/handle/1721.1/3265/P-2108-26912652.pdf and the -implementation from: https://github.com/bkj/auction-lap Adapted to be more -efficient when each worker is looking for k jobs instead of 1. -*/ -#include -#include -using namespace torch::indexing; -torch::Tensor balanced_assignment(torch::Tensor job_and_worker_to_score) { - int max_iterations = 100; - torch::Tensor epsilon = - (job_and_worker_to_score.max() - job_and_worker_to_score.min()) / 50; - epsilon.clamp_min_(1e-04); - torch::Tensor worker_and_job_to_score = - job_and_worker_to_score.detach().transpose(0, 1).contiguous(); - int num_workers = worker_and_job_to_score.size(0); - int num_jobs = worker_and_job_to_score.size(1); - auto device = worker_and_job_to_score.device(); - int jobs_per_worker = num_jobs / num_workers; - torch::Tensor value = worker_and_job_to_score.clone(); - int counter = 0; - torch::Tensor max_value = worker_and_job_to_score.max(); - - torch::Tensor bid_indices; - torch::Tensor cost = worker_and_job_to_score.new_zeros({1, num_jobs}); - torch::Tensor bids = - worker_and_job_to_score.new_empty({num_workers, num_jobs}); - torch::Tensor bid_increments = - worker_and_job_to_score.new_empty({num_workers, jobs_per_worker}); - torch::Tensor top_values = - worker_and_job_to_score.new_empty({num_workers, jobs_per_worker + 1}); - torch::Tensor high_bids = worker_and_job_to_score.new_empty({num_jobs}); - - torch::Tensor top_index = top_values.to(torch::kLong); - torch::Tensor high_bidders = top_index.new_empty({num_jobs}); - torch::Tensor have_bids = high_bidders.to(torch::kBool); - torch::Tensor jobs_indices = - torch::arange({num_jobs}, torch::dtype(torch::kLong).device(device)); - torch::Tensor true_tensor = - torch::ones({1}, torch::dtype(torch::kBool).device(device)); - - while (true) { - bids.zero_(); - torch::topk_out(top_values, top_index, value, jobs_per_worker + 1, 1); - - // Each worker bids the difference in value between that job and the k+1th - // job - torch::sub_out( - bid_increments, - top_values.index({Slice(None, None), Slice(0, jobs_per_worker)}), - top_values.index({Slice(None, None), jobs_per_worker}).unsqueeze(1)); - - bid_increments.add_(epsilon); - bids.scatter_( - 1, - top_index.index({Slice(None, None), Slice(0, jobs_per_worker)}), - bid_increments); - - if (counter < max_iterations && counter > 0) { - // Put in a minimal bid to retain items from the last round if no-one else - // bids for them this round - bids.view(-1).index_put_({bid_indices}, epsilon); - } - - // Find the highest bidding worker per job - torch::max_out(high_bids, high_bidders, bids, 0); - torch::gt_out(have_bids, high_bids, 0); - - if (have_bids.all().item()) { - // All jobs were bid for - break; - } - - // Make popular items more expensive - cost.add_(high_bids); - torch::sub_out(value, worker_and_job_to_score, cost); - - bid_indices = ((high_bidders * num_jobs) + jobs_indices).index({have_bids}); - - if (counter < max_iterations) { - // Make sure that this item will be in the winning worker's top-k next - // time. - value.view(-1).index_put_({bid_indices}, max_value); - } else { - // Suboptimal approximation that converges quickly from current solution - value.view(-1).index_put_( - {bid_indices}, worker_and_job_to_score.view(-1).index({bid_indices})); - } - - counter += 1; - } - - return top_index.index({Slice(None, None), Slice(0, jobs_per_worker)}) - .reshape(-1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("balanced_assignment", &balanced_assignment, "Balanced Assignment"); -} diff --git a/spaces/OIUGLK/bingo/src/components/chat-header.tsx b/spaces/OIUGLK/bingo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
              - logo -
              欢迎使用新必应
              -
              由 AI 支持的网页版 Copilot
              -
              - ) -} diff --git a/spaces/ORI-Muchim/NahidaTTS/README.md b/spaces/ORI-Muchim/NahidaTTS/README.md deleted file mode 100644 index fcd7204bf54b78ebc6fb7afd06fd72892891a2ee..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/NahidaTTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NahidaTTS -emoji: 🍃 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/hooks/subTaskProgressListener.py b/spaces/Olivier-Truong/faster-whisper-webui-v2/src/hooks/subTaskProgressListener.py deleted file mode 100644 index 9a8eaa876fcd18032875d67535e0558494842c60..0000000000000000000000000000000000000000 --- a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/hooks/subTaskProgressListener.py +++ /dev/null @@ -1,37 +0,0 @@ -from src.hooks.progressListener import ProgressListener - -from typing import Union - -class SubTaskProgressListener(ProgressListener): - """ - A sub task listener that reports the progress of a sub task to a base task listener - Parameters - ---------- - base_task_listener : ProgressListener - The base progress listener to accumulate overall progress in. - base_task_total : float - The maximum total progress that will be reported to the base progress listener. - sub_task_start : float - The starting progress of a sub task, in respect to the base progress listener. - sub_task_total : float - The total amount of progress a sub task will report to the base progress listener. - """ - def __init__( - self, - base_task_listener: ProgressListener, - base_task_total: float, - sub_task_start: float, - sub_task_total: float, - ): - self.base_task_listener = base_task_listener - self.base_task_total = base_task_total - self.sub_task_start = sub_task_start - self.sub_task_total = sub_task_total - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - sub_task_progress_frac = current / total - sub_task_progress = self.sub_task_start + self.sub_task_total * sub_task_progress_frac - self.base_task_listener.on_progress(sub_task_progress, self.base_task_total) - - def on_finished(self): - self.base_task_listener.on_progress(self.sub_task_start + self.sub_task_total, self.base_task_total) \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/deform_conv.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/deform_conv.py deleted file mode 100644 index eca070f59645af4c9ccd003d99678f19538f355d..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/deform_conv.py +++ /dev/null @@ -1,501 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from functools import lru_cache -import torch -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair -from torchvision.ops import deform_conv2d - -from detectron2 import _C - -from .wrappers import _NewEmptyTensorOp - - -class _DeformConv(Function): - @staticmethod - def forward( - ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64, - ): - if input is not None and input.dim() != 4: - raise ValueError( - "Expected 4D tensor as input, got {}D tensor instead.".format(input.dim()) - ) - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty( - _DeformConv._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride) - ) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - if deformable_groups != 1: - raise NotImplementedError( - "Deformable Conv with deformable_groups != 1 is not supported on CPUs!" - ) - return deform_conv2d( - input, offset, weight, stride=stride, padding=padding, dilation=dilation - ) - else: - cur_im2col_step = _DeformConv._cal_im2col_step(input.shape[0], ctx.im2col_step) - assert (input.shape[0] % cur_im2col_step) == 0, "im2col step must divide batchsize" - - _C.deform_conv_forward( - input, - weight, - offset, - output, - ctx.bufs_[0], - ctx.bufs_[1], - weight.size(3), - weight.size(2), - ctx.stride[1], - ctx.stride[0], - ctx.padding[1], - ctx.padding[0], - ctx.dilation[1], - ctx.dilation[0], - ctx.groups, - ctx.deformable_groups, - cur_im2col_step, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError("Deformable Conv is not supported on CPUs!") - else: - cur_im2col_step = _DeformConv._cal_im2col_step(input.shape[0], ctx.im2col_step) - assert (input.shape[0] % cur_im2col_step) == 0, "im2col step must divide batchsize" - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - _C.deform_conv_backward_input( - input, - offset, - grad_output, - grad_input, - grad_offset, - weight, - ctx.bufs_[0], - weight.size(3), - weight.size(2), - ctx.stride[1], - ctx.stride[0], - ctx.padding[1], - ctx.padding[0], - ctx.dilation[1], - ctx.dilation[0], - ctx.groups, - ctx.deformable_groups, - cur_im2col_step, - ) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - _C.deform_conv_backward_filter( - input, - offset, - grad_output, - grad_weight, - ctx.bufs_[0], - ctx.bufs_[1], - weight.size(3), - weight.size(2), - ctx.stride[1], - ctx.stride[0], - ctx.padding[1], - ctx.padding[0], - ctx.dilation[1], - ctx.dilation[0], - ctx.groups, - ctx.deformable_groups, - 1, - cur_im2col_step, - ) - - return grad_input, grad_offset, grad_weight, None, None, None, None, None, None - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1,) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - "convolution input is too small (output would be {})".format( - "x".join(map(str, output_size)) - ) - ) - return output_size - - @staticmethod - @lru_cache(maxsize=128) - def _cal_im2col_step(input_size, default_size): - """ - Calculate proper im2col step size, which should be divisible by input_size and not larger - than prefer_size. Meanwhile the step size should be as large as possible to be more - efficient. So we choose the largest one among all divisors of input_size which are smaller - than prefer_size. - :param input_size: input batch size . - :param default_size: default preferred im2col step size. - :return: the largest proper step size. - """ - if input_size <= default_size: - return input_size - best_step = 1 - for step in range(2, min(int(math.sqrt(input_size)) + 1, default_size)): - if input_size % step == 0: - if input_size // step <= default_size: - return input_size // step - best_step = step - - return best_step - - -class _ModulatedDeformConv(Function): - @staticmethod - def forward( - ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - ): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError("Deformable Conv is not supported on CPUs!") - if ( - weight.requires_grad - or mask.requires_grad - or offset.requires_grad - or input.requires_grad - ): - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(_ModulatedDeformConv._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - _C.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - weight.shape[2], - weight.shape[3], - ctx.stride, - ctx.stride, - ctx.padding, - ctx.padding, - ctx.dilation, - ctx.dilation, - ctx.groups, - ctx.deformable_groups, - ctx.with_bias, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError("Deformable Conv is not supported on CPUs!") - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - _C.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - weight.shape[2], - weight.shape[3], - ctx.stride, - ctx.stride, - ctx.padding, - ctx.padding, - ctx.dilation, - ctx.dilation, - ctx.groups, - ctx.deformable_groups, - ctx.with_bias, - ) - if not ctx.with_bias: - grad_bias = None - - return ( - grad_input, - grad_offset, - grad_mask, - grad_weight, - grad_bias, - None, - None, - None, - None, - None, - ) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, kernel_w = weight.shape[2:4] - height_out = ( - height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1) - ) // ctx.stride + 1 - width_out = ( - width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1) - ) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = _DeformConv.apply -modulated_deform_conv = _ModulatedDeformConv.apply - - -class DeformConv(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False, - norm=None, - activation=None, - ): - """ - Deformable convolution from :paper:`deformconv`. - - Arguments are similar to :class:`Conv2D`. Extra arguments: - - Args: - deformable_groups (int): number of groups used in deformable convolution. - norm (nn.Module, optional): a normalization layer - activation (callable(Tensor) -> Tensor): a callable activation function - """ - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, "in_channels {} cannot be divisible by groups {}".format( - in_channels, groups - ) - assert ( - out_channels % groups == 0 - ), "out_channels {} cannot be divisible by groups {}".format(out_channels, groups) - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - self.norm = norm - self.activation = activation - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size) - ) - self.bias = None - - nn.init.kaiming_uniform_(self.weight, nonlinearity="relu") - - def forward(self, x, offset): - if x.numel() == 0: - # When input is empty, we want to return a empty tensor with "correct" shape, - # So that the following operations will not panic - # if they check for the shape of the tensor. - # This computes the height and width of the output tensor - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // s + 1 - for i, p, di, k, s in zip( - x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride - ) - ] - output_shape = [x.shape[0], self.weight.shape[0]] + output_shape - return _NewEmptyTensorOp.apply(x, output_shape) - - x = deform_conv( - x, - offset, - self.weight, - self.stride, - self.padding, - self.dilation, - self.groups, - self.deformable_groups, - ) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - def extra_repr(self): - tmpstr = "in_channels=" + str(self.in_channels) - tmpstr += ", out_channels=" + str(self.out_channels) - tmpstr += ", kernel_size=" + str(self.kernel_size) - tmpstr += ", stride=" + str(self.stride) - tmpstr += ", padding=" + str(self.padding) - tmpstr += ", dilation=" + str(self.dilation) - tmpstr += ", groups=" + str(self.groups) - tmpstr += ", deformable_groups=" + str(self.deformable_groups) - tmpstr += ", bias=False" - return tmpstr - - -class ModulatedDeformConv(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True, - norm=None, - activation=None, - ): - """ - Modulated deformable convolution from :paper:`deformconv2`. - - Arguments are similar to :class:`Conv2D`. Extra arguments: - - Args: - deformable_groups (int): number of groups used in deformable convolution. - norm (nn.Module, optional): a normalization layer - activation (callable(Tensor) -> Tensor): a callable activation function - """ - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - self.norm = norm - self.activation = activation - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, *self.kernel_size) - ) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - nn.init.kaiming_uniform_(self.weight, nonlinearity="relu") - if self.bias is not None: - nn.init.constant_(self.bias, 0) - - def forward(self, x, offset, mask): - if x.numel() == 0: - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // s + 1 - for i, p, di, k, s in zip( - x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride - ) - ] - output_shape = [x.shape[0], self.weight.shape[0]] + output_shape - return _NewEmptyTensorOp.apply(x, output_shape) - - x = modulated_deform_conv( - x, - offset, - mask, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - self.deformable_groups, - ) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - def extra_repr(self): - tmpstr = "in_channels=" + str(self.in_channels) - tmpstr += ", out_channels=" + str(self.out_channels) - tmpstr += ", kernel_size=" + str(self.kernel_size) - tmpstr += ", stride=" + str(self.stride) - tmpstr += ", padding=" + str(self.padding) - tmpstr += ", dilation=" + str(self.dilation) - tmpstr += ", groups=" + str(self.groups) - tmpstr += ", deformable_groups=" + str(self.deformable_groups) - tmpstr += ", bias=" + str(self.with_bias) - return tmpstr diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/structures/keypoints.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/structures/keypoints.py deleted file mode 100644 index d0ee8724ac42087e4ec770a3dfb8e040a62b4c15..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/structures/keypoints.py +++ /dev/null @@ -1,239 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Any, List, Tuple, Union -import torch -from torch.nn import functional as F - - -class Keypoints: - """ - Stores keypoint **annotation** data. GT Instances have a `gt_keypoints` property - containing the x,y location and visibility flag of each keypoint. This tensor has shape - (N, K, 3) where N is the number of instances and K is the number of keypoints per instance. - - The visibility flag follows the COCO format and must be one of three integers: - - * v=0: not labeled (in which case x=y=0) - * v=1: labeled but not visible - * v=2: labeled and visible - """ - - def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]): - """ - Arguments: - keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint. - The shape should be (N, K, 3) where N is the number of - instances, and K is the number of keypoints per instance. - """ - device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu") - keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device) - assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape - self.tensor = keypoints - - def __len__(self) -> int: - return self.tensor.size(0) - - def to(self, *args: Any, **kwargs: Any) -> "Keypoints": - return type(self)(self.tensor.to(*args, **kwargs)) - - @property - def device(self) -> torch.device: - return self.tensor.device - - def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor: - """ - Convert keypoint annotations to a heatmap of one-hot labels for training, - as described in :paper:`Mask R-CNN`. - - Arguments: - boxes: Nx4 tensor, the boxes to draw the keypoints to - - Returns: - heatmaps: - A tensor of shape (N, K), each element is integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: - A tensor of shape (N, K) containing whether each keypoint is in the roi or not. - """ - return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size) - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints": - """ - Create a new `Keypoints` by indexing on this `Keypoints`. - - The following usage are allowed: - - 1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance. - 2. `new_kpts = kpts[2:10]`: return a slice of key points. - 3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor - with `length = len(kpts)`. Nonzero elements in the vector will be selected. - - Note that the returned Keypoints might share storage with this Keypoints, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Keypoints([self.tensor[item]]) - return Keypoints(self.tensor[item]) - - def __repr__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={})".format(len(self.tensor)) - return s - - @staticmethod - def cat(keypoints_list: List["Keypoints"]) -> "Keypoints": - """ - Concatenates a list of Keypoints into a single Keypoints - - Arguments: - keypoints_list (list[Keypoints]) - - Returns: - Keypoints: the concatenated Keypoints - """ - assert isinstance(keypoints_list, (list, tuple)) - assert len(keypoints_list) > 0 - assert all(isinstance(keypoints, Keypoints) for keypoints in keypoints_list) - - cat_kpts = type(keypoints_list[0])( - torch.cat([kpts.tensor for kpts in keypoints_list], dim=0) - ) - return cat_kpts - - -# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop) -def _keypoints_to_heatmap( - keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int -) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space. - - Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the - closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the - continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"): - d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - - Arguments: - keypoints: tensor of keypoint locations in of shape (N, K, 3). - rois: Nx4 tensor of rois in xyxy format - heatmap_size: integer side length of square heatmap. - - Returns: - heatmaps: A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: A tensor of shape (N, K) containing whether each keypoint is in - the roi or not. - """ - - if rois.numel() == 0: - return rois.new().long(), rois.new().long() - offset_x = rois[:, 0] - offset_y = rois[:, 1] - scale_x = heatmap_size / (rois[:, 2] - rois[:, 0]) - scale_y = heatmap_size / (rois[:, 3] - rois[:, 1]) - - offset_x = offset_x[:, None] - offset_y = offset_y[:, None] - scale_x = scale_x[:, None] - scale_y = scale_y[:, None] - - x = keypoints[..., 0] - y = keypoints[..., 1] - - x_boundary_inds = x == rois[:, 2][:, None] - y_boundary_inds = y == rois[:, 3][:, None] - - x = (x - offset_x) * scale_x - x = x.floor().long() - y = (y - offset_y) * scale_y - y = y.floor().long() - - x[x_boundary_inds] = heatmap_size - 1 - y[y_boundary_inds] = heatmap_size - 1 - - valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size) - vis = keypoints[..., 2] > 0 - valid = (valid_loc & vis).long() - - lin_ind = y * heatmap_size + x - heatmaps = lin_ind * valid - - return heatmaps, valid - - -@torch.jit.script_if_tracing -def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor: - """ - Extract predicted keypoint locations from heatmaps. - - Args: - maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for - each ROI and each keypoint. - rois (Tensor): (#ROIs, 4). The box of each ROI. - - Returns: - Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to - (x, y, logit, score) for each keypoint. - - When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate, - we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from - Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - """ - # The decorator use of torch.no_grad() was not supported by torchscript. - # https://github.com/pytorch/pytorch/issues/44768 - maps = maps.detach() - rois = rois.detach() - - offset_x = rois[:, 0] - offset_y = rois[:, 1] - - widths = (rois[:, 2] - rois[:, 0]).clamp(min=1) - heights = (rois[:, 3] - rois[:, 1]).clamp(min=1) - widths_ceil = widths.ceil() - heights_ceil = heights.ceil() - - num_rois, num_keypoints = maps.shape[:2] - xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4) - - width_corrections = widths / widths_ceil - height_corrections = heights / heights_ceil - - keypoints_idx = torch.arange(num_keypoints, device=maps.device) - - for i in range(num_rois): - outsize = (int(heights_ceil[i]), int(widths_ceil[i])) - roi_map = F.interpolate( - maps[[i]], size=outsize, mode="bicubic", align_corners=False - ).squeeze( - 0 - ) # #keypoints x H x W - - # softmax over the spatial region - max_score, _ = roi_map.view(num_keypoints, -1).max(1) - max_score = max_score.view(num_keypoints, 1, 1) - tmp_full_resolution = (roi_map - max_score).exp_() - tmp_pool_resolution = (maps[i] - max_score).exp_() - # Produce scores over the region H x W, but normalize with POOL_H x POOL_W, - # so that the scores of objects of different absolute sizes will be more comparable - roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True) - - w = roi_map.shape[2] - pos = roi_map.view(num_keypoints, -1).argmax(1) - - x_int = pos % w - y_int = (pos - x_int) // w - - assert ( - roi_map_scores[keypoints_idx, y_int, x_int] - == roi_map_scores.view(num_keypoints, -1).max(1)[0] - ).all() - - x = (x_int.float() + 0.5) * width_corrections[i] - y = (y_int.float() + 0.5) * height_corrections[i] - - xy_preds[i, :, 0] = x + offset_x[i] - xy_preds[i, :, 1] = y + offset_y[i] - xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int] - xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int] - - return xy_preds diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/env.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/env.py deleted file mode 100644 index 40634c17c73273ac8927632be164f466cfe7d1fa..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/env.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import importlib -import importlib.util -import logging -import numpy as np -import os -import random -import sys -from datetime import datetime -import torch - -__all__ = ["seed_all_rng"] - - -TORCH_VERSION = tuple(int(x) for x in torch.__version__.split(".")[:2]) -""" -PyTorch version as a tuple of 2 ints. Useful for comparison. -""" - - -DOC_BUILDING = os.getenv("_DOC_BUILDING", False) # set in docs/conf.py -""" -Whether we're building documentation. -""" - - -def seed_all_rng(seed=None): - """ - Set the random seed for the RNG in torch, numpy and python. - - Args: - seed (int): if None, will use a strong random seed. - """ - if seed is None: - seed = ( - os.getpid() - + int(datetime.now().strftime("%S%f")) - + int.from_bytes(os.urandom(2), "big") - ) - logger = logging.getLogger(__name__) - logger.info("Using a generated random seed {}".format(seed)) - np.random.seed(seed) - torch.manual_seed(seed) - random.seed(seed) - os.environ["PYTHONHASHSEED"] = str(seed) - - -# from https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path -def _import_file(module_name, file_path, make_importable=False): - spec = importlib.util.spec_from_file_location(module_name, file_path) - module = importlib.util.module_from_spec(spec) - spec.loader.exec_module(module) - if make_importable: - sys.modules[module_name] = module - return module - - -def _configure_libraries(): - """ - Configurations for some libraries. - """ - # An environment option to disable `import cv2` globally, - # in case it leads to negative performance impact - disable_cv2 = int(os.environ.get("DETECTRON2_DISABLE_CV2", False)) - if disable_cv2: - sys.modules["cv2"] = None - else: - # Disable opencl in opencv since its interaction with cuda often has negative effects - # This envvar is supported after OpenCV 3.4.0 - os.environ["OPENCV_OPENCL_RUNTIME"] = "disabled" - try: - import cv2 - - if int(cv2.__version__.split(".")[0]) >= 3: - cv2.ocl.setUseOpenCL(False) - except ModuleNotFoundError: - # Other types of ImportError, if happened, should not be ignored. - # Because a failed opencv import could mess up address space - # https://github.com/skvark/opencv-python/issues/381 - pass - - def get_version(module, digit=2): - return tuple(map(int, module.__version__.split(".")[:digit])) - - # fmt: off - assert get_version(torch) >= (1, 4), "Requires torch>=1.4" - import fvcore - assert get_version(fvcore, 3) >= (0, 1, 2), "Requires fvcore>=0.1.2" - import yaml - assert get_version(yaml) >= (5, 1), "Requires pyyaml>=5.1" - # fmt: on - - -_ENV_SETUP_DONE = False - - -def setup_environment(): - """Perform environment setup work. The default setup is a no-op, but this - function allows the user to specify a Python source file or a module in - the $DETECTRON2_ENV_MODULE environment variable, that performs - custom setup work that may be necessary to their computing environment. - """ - global _ENV_SETUP_DONE - if _ENV_SETUP_DONE: - return - _ENV_SETUP_DONE = True - - _configure_libraries() - - custom_module_path = os.environ.get("DETECTRON2_ENV_MODULE") - - if custom_module_path: - setup_custom_environment(custom_module_path) - else: - # The default setup is a no-op - pass - - -def setup_custom_environment(custom_module): - """ - Load custom environment setup by importing a Python source file or a - module, and run the setup function. - """ - if custom_module.endswith(".py"): - module = _import_file("detectron2.utils.env.custom_module", custom_module) - else: - module = importlib.import_module(custom_module) - assert hasattr(module, "setup_environment") and callable(module.setup_environment), ( - "Custom environment module defined in {} does not have the " - "required callable attribute 'setup_environment'." - ).format(custom_module) - module.setup_environment() - - -def fixup_module_metadata(module_name, namespace, keys=None): - """ - Fix the __qualname__ of module members to be their exported api name, so - when they are referenced in docs, sphinx can find them. Reference: - https://github.com/python-trio/trio/blob/6754c74eacfad9cc5c92d5c24727a2f3b620624e/trio/_util.py#L216-L241 - """ - if not DOC_BUILDING: - return - seen_ids = set() - - def fix_one(qualname, name, obj): - # avoid infinite recursion (relevant when using - # typing.Generic, for example) - if id(obj) in seen_ids: - return - seen_ids.add(id(obj)) - - mod = getattr(obj, "__module__", None) - if mod is not None and (mod.startswith(module_name) or mod.startswith("fvcore.")): - obj.__module__ = module_name - # Modules, unlike everything else in Python, put fully-qualitied - # names into their __name__ attribute. We check for "." to avoid - # rewriting these. - if hasattr(obj, "__name__") and "." not in obj.__name__: - obj.__name__ = name - obj.__qualname__ = qualname - if isinstance(obj, type): - for attr_name, attr_value in obj.__dict__.items(): - fix_one(objname + "." + attr_name, attr_name, attr_value) - - if keys is None: - keys = namespace.keys() - for objname in keys: - if not objname.startswith("_"): - obj = namespace[objname] - fix_one(objname, objname, obj) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/intern_action.py b/spaces/OpenGVLab/InternGPT/iGPT/models/intern_action.py deleted file mode 100644 index f60f6df1748bce94c4ebaed85619c403b2ec33ab..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/intern_action.py +++ /dev/null @@ -1,510 +0,0 @@ -#!/usr/bin/env python -import os -from collections import OrderedDict - -from timm.models.layers import DropPath -import torch -from torch import nn -from torch.nn import MultiheadAttention -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint - - -MODEL_PATH = './' -_MODELS = { - "ViT-B/16": os.path.join(MODEL_PATH, "vit_b16.pth"), - "ViT-L/14": os.path.join(MODEL_PATH, "vit_l14.pth"), - "ViT-L/14_336": os.path.join(MODEL_PATH, "vit_l14_336.pth"), -} - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(1.702 * x) - - -class Local_MHRA(nn.Module): - def __init__(self, d_model, dw_reduction=1.5, pos_kernel_size=3): - super().__init__() - - padding = pos_kernel_size // 2 - re_d_model = int(d_model // dw_reduction) - self.pos_embed = nn.Sequential( - nn.BatchNorm3d(d_model), - nn.Conv3d(d_model, re_d_model, kernel_size=1, stride=1, padding=0), - nn.Conv3d(re_d_model, re_d_model, kernel_size=(pos_kernel_size, 1, 1), stride=(1, 1, 1), padding=(padding, 0, 0), groups=re_d_model), - nn.Conv3d(re_d_model, d_model, kernel_size=1, stride=1, padding=0), - ) - - # init zero - print('Init zero for Conv in pos_emb') - nn.init.constant_(self.pos_embed[3].weight, 0) - nn.init.constant_(self.pos_embed[3].bias, 0) - - def forward(self, x): - return self.pos_embed(x) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, d_model, n_head, attn_mask=None, drop_path=0.0, - dw_reduction=1.5, no_lmhra=False, double_lmhra=True - ): - super().__init__() - - self.n_head = n_head - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - print(f'Drop path rate: {drop_path}') - - self.no_lmhra = no_lmhra - self.double_lmhra = double_lmhra - print(f'No L_MHRA: {no_lmhra}') - print(f'Double L_MHRA: {double_lmhra}') - if not no_lmhra: - self.lmhra1 = Local_MHRA(d_model, dw_reduction=dw_reduction) - if double_lmhra: - self.lmhra2 = Local_MHRA(d_model, dw_reduction=dw_reduction) - - # spatial - self.attn = MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x, T=8, use_checkpoint=False): - # x: 1+HW, NT, C - if not self.no_lmhra: - # Local MHRA - tmp_x = x[1:, :, :] - L, NT, C = tmp_x.shape - N = NT // T - H = W = int(L ** 0.5) - tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous() - tmp_x = tmp_x + self.drop_path(self.lmhra1(tmp_x)) - tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C) - x = torch.cat([x[:1, :, :], tmp_x], dim=0) - # MHSA - if use_checkpoint: - attn_out = checkpoint.checkpoint(self.attention, self.ln_1(x)) - x = x + self.drop_path(attn_out) - else: - x = x + self.drop_path(self.attention(self.ln_1(x))) - # Local MHRA - if not self.no_lmhra and self.double_lmhra: - tmp_x = x[1:, :, :] - tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous() - tmp_x = tmp_x + self.drop_path(self.lmhra2(tmp_x)) - tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C) - x = torch.cat([x[:1, :, :], tmp_x], dim=0) - # FFN - if use_checkpoint: - mlp_out = checkpoint.checkpoint(self.mlp, self.ln_2(x)) - x = x + self.drop_path(mlp_out) - else: - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Extractor(nn.Module): - def __init__( - self, d_model, n_head, attn_mask=None, - mlp_factor=4.0, dropout=0.0, drop_path=0.0, - ): - super().__init__() - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - print(f'Drop path rate: {drop_path}') - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = nn.LayerNorm(d_model) - d_mlp = round(mlp_factor * d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_mlp)), - ("gelu", QuickGELU()), - ("dropout", nn.Dropout(dropout)), - ("c_proj", nn.Linear(d_mlp, d_model)) - ])) - self.ln_2 = nn.LayerNorm(d_model) - self.ln_3 = nn.LayerNorm(d_model) - self.attn_mask = attn_mask - - # zero init - nn.init.xavier_uniform_(self.attn.in_proj_weight) - nn.init.constant_(self.attn.out_proj.weight, 0.) - nn.init.constant_(self.attn.out_proj.bias, 0.) - nn.init.xavier_uniform_(self.mlp[0].weight) - nn.init.constant_(self.mlp[-1].weight, 0.) - nn.init.constant_(self.mlp[-1].bias, 0.) - - def attention(self, x, y): - d_model = self.ln_1.weight.size(0) - q = (x @ self.attn.in_proj_weight[:d_model].T) + self.attn.in_proj_bias[:d_model] - - k = (y @ self.attn.in_proj_weight[d_model:-d_model].T) + self.attn.in_proj_bias[d_model:-d_model] - v = (y @ self.attn.in_proj_weight[-d_model:].T) + self.attn.in_proj_bias[-d_model:] - Tx, Ty, N = q.size(0), k.size(0), q.size(1) - q = q.view(Tx, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - k = k.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - v = v.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - aff = (q @ k.transpose(-2, -1) / (self.attn.head_dim ** 0.5)) - - aff = aff.softmax(dim=-1) - out = aff @ v - out = out.permute(2, 0, 1, 3).flatten(2) - out = self.attn.out_proj(out) - return out - - def forward(self, x, y): - x = x + self.drop_path(self.attention(self.ln_1(x), self.ln_3(y))) - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Transformer(nn.Module): - def __init__( - self, width, layers, heads, attn_mask=None, backbone_drop_path_rate=0., - use_checkpoint=False, checkpoint_num=[0], t_size=8, dw_reduction=2, - no_lmhra=False, double_lmhra=True, - return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], - n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, - ): - super().__init__() - self.T = t_size - self.return_list = return_list - # backbone - b_dpr = [x.item() for x in torch.linspace(0, backbone_drop_path_rate, layers)] - self.resblocks = nn.ModuleList([ - ResidualAttentionBlock( - width, heads, attn_mask, - drop_path=b_dpr[i], - dw_reduction=dw_reduction, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - ) for i in range(layers) - ]) - # checkpoint - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.n_layers = n_layers - print(f'Use checkpoint: {self.use_checkpoint}') - print(f'Checkpoint number: {self.checkpoint_num}') - - # global block - assert n_layers == len(return_list) - if n_layers > 0: - self.temporal_cls_token = nn.Parameter(torch.zeros(1, 1, n_dim)) - self.dpe = nn.ModuleList([ - nn.Conv3d(n_dim, n_dim, kernel_size=3, stride=1, padding=1, bias=True, groups=n_dim) - for i in range(n_layers) - ]) - for m in self.dpe: - nn.init.constant_(m.bias, 0.) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, n_layers)] - self.dec = nn.ModuleList([ - Extractor( - n_dim, n_head, mlp_factor=mlp_factor, - dropout=mlp_dropout[i], drop_path=dpr[i], - ) for i in range(n_layers) - ]) - self.balance = nn.Parameter(torch.zeros((n_dim))) - self.sigmoid = nn.Sigmoid() - # projection - self.proj = nn.Sequential( - nn.LayerNorm(n_dim), - nn.Dropout(cls_dropout), - nn.Linear(n_dim, num_classes), - ) - - def forward(self, x): - T_down = self.T - L, NT, C = x.shape - N = NT // T_down - H = W = int((L - 1) ** 0.5) - - if self.n_layers > 0: - cls_token = self.temporal_cls_token.repeat(1, N, 1) - - j = -1 - for i, resblock in enumerate(self.resblocks): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = resblock(x, self.T, use_checkpoint=True) - else: - x = resblock(x, T_down) - if i in self.return_list: - j += 1 - tmp_x = x.clone() - tmp_x = tmp_x.view(L, N, T_down, C) - # dpe - _, tmp_feats = tmp_x[:1], tmp_x[1:] - tmp_feats = tmp_feats.permute(1, 3, 2, 0).reshape(N, C, T_down, H, W) - tmp_feats = self.dpe[j](tmp_feats).view(N, C, T_down, L - 1).permute(3, 0, 2, 1).contiguous() - tmp_x[1:] = tmp_x[1:] + tmp_feats - # global block - tmp_x = tmp_x.permute(2, 0, 1, 3).flatten(0, 1) # T * L, N, C - cls_token = self.dec[j](cls_token, tmp_x) - - if self.n_layers > 0: - weight = self.sigmoid(self.balance) - residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C - return self.proj((1 - weight) * cls_token[0, :, :] + weight * residual) - else: - residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C - return self.proj(residual) - - -class VisionTransformer(nn.Module): - def __init__( - self, - # backbone - input_resolution, patch_size, width, layers, heads, output_dim, backbone_drop_path_rate=0., - use_checkpoint=False, checkpoint_num=[0], t_size=8, kernel_size=3, dw_reduction=1.5, - temporal_downsample=True, - no_lmhra=-False, double_lmhra=True, - # global block - return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], - n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, - ): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - padding = (kernel_size - 1) // 2 - if temporal_downsample: - self.conv1 = nn.Conv3d(3, width, (kernel_size, patch_size, patch_size), (2, patch_size, patch_size), (padding, 0, 0), bias=False) - t_size = t_size // 2 - else: - self.conv1 = nn.Conv3d(3, width, (1, patch_size, patch_size), (1, patch_size, patch_size), (0, 0, 0), bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer( - width, layers, heads, dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - use_checkpoint=use_checkpoint, checkpoint_num=checkpoint_num, t_size=t_size, - no_lmhra=no_lmhra, double_lmhra=double_lmhra, - return_list=return_list, n_layers=n_layers, n_dim=n_dim, n_head=n_head, - mlp_factor=mlp_factor, drop_path_rate=drop_path_rate, mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, num_classes=num_classes, - ) - - def forward(self, x): - x = self.conv1(x) # shape = [*, width, grid, grid] - N, C, T, H, W = x.shape - x = x.permute(0, 2, 3, 4, 1).reshape(N * T, H * W, C) - - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - out = self.transformer(x) - return out - - -def inflate_weight(weight_2d, time_dim, center=True): - print(f'Init center: {center}') - if center: - weight_3d = torch.zeros(*weight_2d.shape) - weight_3d = weight_3d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) - middle_idx = time_dim // 2 - weight_3d[:, :, middle_idx, :, :] = weight_2d - else: - weight_3d = weight_2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) - weight_3d = weight_3d / time_dim - return weight_3d - - -def load_state_dict(model, state_dict): - state_dict_3d = model.state_dict() - for k in state_dict.keys(): - if state_dict[k].shape != state_dict_3d[k].shape: - if len(state_dict_3d[k].shape) <= 2: - print(f'Ignore: {k}') - continue - print(f'Inflate: {k}, {state_dict[k].shape} => {state_dict_3d[k].shape}') - time_dim = state_dict_3d[k].shape[2] - state_dict[k] = inflate_weight(state_dict[k], time_dim) - model.load_state_dict(state_dict, strict=False) - - -def intern_action_b16( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[8, 9, 10, 11], - n_layers=4, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=224, - patch_size=16, - width=768, - layers=12, - heads=12, - output_dim=512, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-B/16"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -def intern_action_l14( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[20, 21, 22, 23], - n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=224, - patch_size=14, - width=1024, - layers=24, - heads=16, - output_dim=768, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-L/14"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -def intern_action_l14_336( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - no_temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[20, 21, 22, 23], - n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=336, - patch_size=14, - width=1024, - layers=24, - heads=16, - output_dim=768, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - no_temporal_downsample=no_temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-L/14_336"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -if __name__ == '__main__': - import time - from fvcore.nn import FlopCountAnalysis - from fvcore.nn import flop_count_table - import numpy as np - - seed = 4217 - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - num_frames = 16 - - model = intern_action_l14( - pretrained=False, - t_size=num_frames, backbone_drop_path_rate=0., drop_path_rate=0., - dw_reduction=1.5, - no_lmhra=False, - temporal_downsample=True, - return_list=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], - mlp_dropout=[0.5]*16, - n_layers=16 - ) - print(model) - - flops = FlopCountAnalysis(model, torch.rand(1, 3, num_frames, 224, 224)) - s = time.time() - print(flop_count_table(flops, max_depth=1)) - print(time.time()-s) diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/joints2jfeats/__init__.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/joints2jfeats/__init__.py deleted file mode 100644 index 0a924e845912842ec042b5b3195b8da7aee3f252..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/joints2jfeats/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .base import Joints2Jfeats -from .rifke import Rifke diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/scheme/decompile-tree-il.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/scheme/decompile-tree-il.go deleted file mode 100644 index 999169fd02a08164c513ec9c0f31da19c34bb368..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/scheme/decompile-tree-il.go and /dev/null differ diff --git a/spaces/Podtekatel/Avatar2VSK/inference/model_pipeline.py b/spaces/Podtekatel/Avatar2VSK/inference/model_pipeline.py deleted file mode 100644 index d03117f9e420367e0733f64ff046c178f147bfbe..0000000000000000000000000000000000000000 --- a/spaces/Podtekatel/Avatar2VSK/inference/model_pipeline.py +++ /dev/null @@ -1,115 +0,0 @@ -import logging -import time - -import cv2 -import numpy as np - -from .center_crop import center_crop -from .face_detector import FaceDetector - - -class VSNetModelPipeline: - def __init__(self, model, face_detector: FaceDetector, background_resize=720, no_detected_resize=256, use_cloning=True): - self.background_resize = background_resize - self.no_detected_resize = no_detected_resize - self.model = model - self.face_detector = face_detector - self.mask = self.create_circular_mask(face_detector.target_size, face_detector.target_size) - self.use_cloning = use_cloning - - @staticmethod - def create_circular_mask(h, w, power=None, clipping_coef=0.85): - center = (int(w / 2), int(h / 2)) - - Y, X = np.ogrid[:h, :w] - dist_from_center = np.sqrt((X - center[0]) ** 2 + (Y - center[1]) ** 2) - print(dist_from_center.max(), dist_from_center.min()) - clipping_radius = min((h - center[0]), (w - center[1])) * clipping_coef - max_size = max((h - center[0]), (w - center[1])) - dist_from_center[dist_from_center < clipping_radius] = clipping_radius - dist_from_center[dist_from_center > max_size] = max_size - max_distance, min_distance = np.max(dist_from_center), np.min(dist_from_center) - dist_from_center = 1 - (dist_from_center - min_distance) / (max_distance - min_distance) - if power is not None: - dist_from_center = np.power(dist_from_center, power) - dist_from_center = np.stack([dist_from_center] * 3, axis=2) - # mask = dist_from_center <= radius - return dist_from_center - - - @staticmethod - def resize_size(image, size=720, always_apply=True): - h, w, c = np.shape(image) - if min(h, w) > size or always_apply: - if h < w: - h, w = int(size * h / w), size - else: - h, w = size, int(size * w / h) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - return image - - def normalize(self, img): - img = img.astype(np.float32) / 255 * 2 - 1 - return img - - def denormalize(self, img): - return (img + 1) / 2 - - def divide_crop(self, img, must_divided=32): - h, w, _ = img.shape - h = h // must_divided * must_divided - w = w // must_divided * must_divided - - img = center_crop(img, h, w) - return img - - def merge_crops(self, faces_imgs, crops, full_image): - for face, crop in zip(faces_imgs, crops): - x1, y1, x2, y2 = crop - W, H = x2 - x1, y2 - y1 - result_face = cv2.resize(face, (W, H), interpolation=cv2.INTER_LINEAR) - face_mask = cv2.resize(self.mask, (W, H), interpolation=cv2.INTER_LINEAR) - if self.use_cloning: - center = round((x2 + x1) / 2), round((y2 + y1) / 2) - full_image = cv2.seamlessClone(result_face, full_image, (face_mask > 0.0).astype(np.uint8) * 255, center, cv2.NORMAL_CLONE) - else: - input_face = full_image[y1: y2, x1: x2] - full_image[y1: y2, x1: x2] = (result_face * face_mask + input_face * (1 - face_mask)).astype(np.uint8) - return full_image - - def __call__(self, img): - return self.process_image(img) - - def process_image(self, img): - img = self.resize_size(img, size=self.background_resize) - img = self.divide_crop(img) - - face_crops, coords = self.face_detector(img) - - if len(face_crops) > 0: - start_time = time.time() - faces = self.normalize(face_crops) - faces = faces.transpose(0, 3, 1, 2) - out_faces = self.model(faces) - out_faces = self.denormalize(out_faces) - out_faces = out_faces.transpose(0, 2, 3, 1) - out_faces = np.clip(out_faces * 255, 0, 255).astype(np.uint8) - end_time = time.time() - logging.info(f'Face FPS {1 / (end_time - start_time)}') - else: - out_faces = [] - img = self.resize_size(img, size=self.no_detected_resize) - img = self.divide_crop(img) - - start_time = time.time() - full_image = self.normalize(img) - full_image = np.expand_dims(full_image, 0).transpose(0, 3, 1, 2) - full_image = self.model(full_image) - full_image = self.denormalize(full_image) - full_image = full_image.transpose(0, 2, 3, 1) - full_image = np.clip(full_image * 255, 0, 255).astype(np.uint8) - end_time = time.time() - logging.info(f'Background FPS {1 / (end_time - start_time)}') - - result_image = self.merge_crops(out_faces, coords, full_image[0]) - return result_image diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py deleted file mode 100644 index 12f6d402a3c4a113d4c37be062790fa435b72104..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Evaluation with objective metrics for the pretrained AudioGen models. -This grid takes signature from the training grid and runs evaluation-only stage. - -When running the grid for the first time, please use: -REGEN=1 dora grid audiogen.audiogen_pretrained_16khz_eval -and re-use the REGEN=1 option when the grid is changed to force regenerating it. - -Note that you need the proper metrics external libraries setup to use all -the objective metrics activated in this grid. Refer to the README for more information. -""" - -import os - -from ..musicgen._explorers import GenerationEvalExplorer -from ...environment import AudioCraftEnvironment -from ... import train - - -def eval(launcher, batch_size: int = 32): - opts = { - 'dset': 'audio/audiocaps_16khz', - 'solver/audiogen/evaluation': 'objective_eval', - 'execute_only': 'evaluate', - '+dataset.evaluate.batch_size': batch_size, - '+metrics.fad.tf.batch_size': 32, - } - # binary for FAD computation: replace this path with your own path - metrics_opts = { - 'metrics.fad.tf.bin': '/data/home/jadecopet/local/usr/opt/google-research' - } - opt1 = {'generate.lm.use_sampling': True, 'generate.lm.top_k': 250, 'generate.lm.top_p': 0.} - opt2 = {'transformer_lm.two_step_cfg': True} - - sub = launcher.bind(opts) - sub.bind_(metrics_opts) - - # base objective metrics - sub(opt1, opt2) - - -@GenerationEvalExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=4, partition=partitions) - - if 'REGEN' not in os.environ: - folder = train.main.dora.dir / 'grids' / __name__.split('.', 2)[-1] - with launcher.job_array(): - for sig in folder.iterdir(): - if not sig.is_symlink(): - continue - xp = train.main.get_xp_from_sig(sig.name) - launcher(xp.argv) - return - - audiogen_base = launcher.bind(solver="audiogen/audiogen_base_16khz") - audiogen_base.bind_({'autocast': False, 'fsdp.use': True}) - - audiogen_base_medium = audiogen_base.bind({'continue_from': '//pretrained/facebook/audiogen-medium'}) - audiogen_base_medium.bind_({'model/lm/model_scale': 'medium'}) - eval(audiogen_base_medium, batch_size=128) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/templates/base.html b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/templates/base.html deleted file mode 100644 index f74668c19ecb83090a8a2d82c026bf417190ec6d..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/templates/base.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - {% block head %} - - - AudioCraft — MOS - {% endblock %} - - -
              -

              AudioCraft — MOS

              - {% block content %}{% endblock %} -
              - - diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/QuanLingZ/ChatResponse/README.md b/spaces/QuanLingZ/ChatResponse/README.md deleted file mode 100644 index 64c0e396daefd18b402f95957882bcf62bb838a1..0000000000000000000000000000000000000000 --- a/spaces/QuanLingZ/ChatResponse/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatResponse -emoji: 🐠 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RMXK/RVC_HFF/demucs/parser.py b/spaces/RMXK/RVC_HFF/demucs/parser.py deleted file mode 100644 index 4e8a19cf976e3c6dfe411da64b8dce3e9a4548e0..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/demucs/parser.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -from pathlib import Path - - -def get_parser(): - parser = argparse.ArgumentParser("demucs", description="Train and evaluate Demucs.") - default_raw = None - default_musdb = None - if 'DEMUCS_RAW' in os.environ: - default_raw = Path(os.environ['DEMUCS_RAW']) - if 'DEMUCS_MUSDB' in os.environ: - default_musdb = Path(os.environ['DEMUCS_MUSDB']) - parser.add_argument( - "--raw", - type=Path, - default=default_raw, - help="Path to raw audio, can be faster, see python3 -m demucs.raw to extract.") - parser.add_argument("--no_raw", action="store_const", const=None, dest="raw") - parser.add_argument("-m", - "--musdb", - type=Path, - default=default_musdb, - help="Path to musdb root") - parser.add_argument("--is_wav", action="store_true", - help="Indicate that the MusDB dataset is in wav format (i.e. MusDB-HQ).") - parser.add_argument("--metadata", type=Path, default=Path("metadata/"), - help="Folder where metadata information is stored.") - parser.add_argument("--wav", type=Path, - help="Path to a wav dataset. This should contain a 'train' and a 'valid' " - "subfolder.") - parser.add_argument("--samplerate", type=int, default=44100) - parser.add_argument("--audio_channels", type=int, default=2) - parser.add_argument("--samples", - default=44100 * 10, - type=int, - help="number of samples to feed in") - parser.add_argument("--data_stride", - default=44100, - type=int, - help="Stride for chunks, shorter = longer epochs") - parser.add_argument("-w", "--workers", default=10, type=int, help="Loader workers") - parser.add_argument("--eval_workers", default=2, type=int, help="Final evaluation workers") - parser.add_argument("-d", - "--device", - help="Device to train on, default is cuda if available else cpu") - parser.add_argument("--eval_cpu", action="store_true", help="Eval on test will be run on cpu.") - parser.add_argument("--dummy", help="Dummy parameter, useful to create a new checkpoint file") - parser.add_argument("--test", help="Just run the test pipeline + one validation. " - "This should be a filename relative to the models/ folder.") - parser.add_argument("--test_pretrained", help="Just run the test pipeline + one validation, " - "on a pretrained model. ") - - parser.add_argument("--rank", default=0, type=int) - parser.add_argument("--world_size", default=1, type=int) - parser.add_argument("--master") - - parser.add_argument("--checkpoints", - type=Path, - default=Path("checkpoints"), - help="Folder where to store checkpoints etc") - parser.add_argument("--evals", - type=Path, - default=Path("evals"), - help="Folder where to store evals and waveforms") - parser.add_argument("--save", - action="store_true", - help="Save estimated for the test set waveforms") - parser.add_argument("--logs", - type=Path, - default=Path("logs"), - help="Folder where to store logs") - parser.add_argument("--models", - type=Path, - default=Path("models"), - help="Folder where to store trained models") - parser.add_argument("-R", - "--restart", - action='store_true', - help='Restart training, ignoring previous run') - - parser.add_argument("--seed", type=int, default=42) - parser.add_argument("-e", "--epochs", type=int, default=180, help="Number of epochs") - parser.add_argument("-r", - "--repeat", - type=int, - default=2, - help="Repeat the train set, longer epochs") - parser.add_argument("-b", "--batch_size", type=int, default=64) - parser.add_argument("--lr", type=float, default=3e-4) - parser.add_argument("--mse", action="store_true", help="Use MSE instead of L1") - parser.add_argument("--init", help="Initialize from a pre-trained model.") - - # Augmentation options - parser.add_argument("--no_augment", - action="store_false", - dest="augment", - default=True, - help="No basic data augmentation.") - parser.add_argument("--repitch", type=float, default=0.2, - help="Probability to do tempo/pitch change") - parser.add_argument("--max_tempo", type=float, default=12, - help="Maximum relative tempo change in %% when using repitch.") - - parser.add_argument("--remix_group_size", - type=int, - default=4, - help="Shuffle sources using group of this size. Useful to somewhat " - "replicate multi-gpu training " - "on less GPUs.") - parser.add_argument("--shifts", - type=int, - default=10, - help="Number of random shifts used for the shift trick.") - parser.add_argument("--overlap", - type=float, - default=0.25, - help="Overlap when --split_valid is passed.") - - # See model.py for doc - parser.add_argument("--growth", - type=float, - default=2., - help="Number of channels between two layers will increase by this factor") - parser.add_argument("--depth", - type=int, - default=6, - help="Number of layers for the encoder and decoder") - parser.add_argument("--lstm_layers", type=int, default=2, help="Number of layers for the LSTM") - parser.add_argument("--channels", - type=int, - default=64, - help="Number of channels for the first encoder layer") - parser.add_argument("--kernel_size", - type=int, - default=8, - help="Kernel size for the (transposed) convolutions") - parser.add_argument("--conv_stride", - type=int, - default=4, - help="Stride for the (transposed) convolutions") - parser.add_argument("--context", - type=int, - default=3, - help="Context size for the decoder convolutions " - "before the transposed convolutions") - parser.add_argument("--rescale", - type=float, - default=0.1, - help="Initial weight rescale reference") - parser.add_argument("--no_resample", action="store_false", - default=True, dest="resample", - help="No Resampling of the input/output x2") - parser.add_argument("--no_glu", - action="store_false", - default=True, - dest="glu", - help="Replace all GLUs by ReLUs") - parser.add_argument("--no_rewrite", - action="store_false", - default=True, - dest="rewrite", - help="No 1x1 rewrite convolutions") - parser.add_argument("--normalize", action="store_true") - parser.add_argument("--no_norm_wav", action="store_false", dest='norm_wav', default=True) - - # Tasnet options - parser.add_argument("--tasnet", action="store_true") - parser.add_argument("--split_valid", - action="store_true", - help="Predict chunks by chunks for valid and test. Required for tasnet") - parser.add_argument("--X", type=int, default=8) - - # Other options - parser.add_argument("--show", - action="store_true", - help="Show model architecture, size and exit") - parser.add_argument("--save_model", action="store_true", - help="Skip traning, just save final model " - "for the current checkpoint value.") - parser.add_argument("--save_state", - help="Skip training, just save state " - "for the current checkpoint value. You should " - "provide a model name as argument.") - - # Quantization options - parser.add_argument("--q-min-size", type=float, default=1, - help="Only quantize layers over this size (in MB)") - parser.add_argument( - "--qat", type=int, help="If provided, use QAT training with that many bits.") - - parser.add_argument("--diffq", type=float, default=0) - parser.add_argument( - "--ms-target", type=float, default=162, - help="Model size target in MB, when using DiffQ. Best model will be kept " - "only if it is smaller than this target.") - - return parser - - -def get_name(parser, args): - """ - Return the name of an experiment given the args. Some parameters are ignored, - for instance --workers, as they do not impact the final result. - """ - ignore_args = set([ - "checkpoints", - "deterministic", - "eval", - "evals", - "eval_cpu", - "eval_workers", - "logs", - "master", - "rank", - "restart", - "save", - "save_model", - "save_state", - "show", - "workers", - "world_size", - ]) - parts = [] - name_args = dict(args.__dict__) - for name, value in name_args.items(): - if name in ignore_args: - continue - if value != parser.get_default(name): - if isinstance(value, Path): - parts.append(f"{name}={value.name}") - else: - parts.append(f"{name}={value}") - if parts: - name = " ".join(parts) - else: - name = "default" - return name diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/distributions/installed.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/distributions/installed.py deleted file mode 100644 index edb38aa1a6c54dcb73e2f74b6bdfff337841d99f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/distributions/installed.py +++ /dev/null @@ -1,23 +0,0 @@ -from pip._internal.distributions.base import AbstractDistribution -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import BaseDistribution - - -class InstalledDistribution(AbstractDistribution): - """Represents an installed package. - - This does not need any preparation as the required information has already - been computed. - """ - - def get_metadata_distribution(self) -> BaseDistribution: - assert self.req.satisfied_by is not None, "not actually installed" - return self.req.satisfied_by - - def prepare_distribution_metadata( - self, - finder: PackageFinder, - build_isolation: bool, - check_build_deps: bool, - ) -> None: - pass diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tomli/_re.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tomli/_re.py deleted file mode 100644 index 994bb7493fd92865e6ab87c277ba5741b44c31a9..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tomli/_re.py +++ /dev/null @@ -1,107 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -from __future__ import annotations - -from datetime import date, datetime, time, timedelta, timezone, tzinfo -from functools import lru_cache -import re -from typing import Any - -from ._types import ParseFloat - -# E.g. -# - 00:32:00.999999 -# - 00:32:00 -_TIME_RE_STR = r"([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(?:\.([0-9]{1,6})[0-9]*)?" - -RE_NUMBER = re.compile( - r""" -0 -(?: - x[0-9A-Fa-f](?:_?[0-9A-Fa-f])* # hex - | - b[01](?:_?[01])* # bin - | - o[0-7](?:_?[0-7])* # oct -) -| -[+-]?(?:0|[1-9](?:_?[0-9])*) # dec, integer part -(?P - (?:\.[0-9](?:_?[0-9])*)? # optional fractional part - (?:[eE][+-]?[0-9](?:_?[0-9])*)? # optional exponent part -) -""", - flags=re.VERBOSE, -) -RE_LOCALTIME = re.compile(_TIME_RE_STR) -RE_DATETIME = re.compile( - rf""" -([0-9]{{4}})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01]) # date, e.g. 1988-10-27 -(?: - [Tt ] - {_TIME_RE_STR} - (?:([Zz])|([+-])([01][0-9]|2[0-3]):([0-5][0-9]))? # optional time offset -)? -""", - flags=re.VERBOSE, -) - - -def match_to_datetime(match: re.Match) -> datetime | date: - """Convert a `RE_DATETIME` match to `datetime.datetime` or `datetime.date`. - - Raises ValueError if the match does not correspond to a valid date - or datetime. - """ - ( - year_str, - month_str, - day_str, - hour_str, - minute_str, - sec_str, - micros_str, - zulu_time, - offset_sign_str, - offset_hour_str, - offset_minute_str, - ) = match.groups() - year, month, day = int(year_str), int(month_str), int(day_str) - if hour_str is None: - return date(year, month, day) - hour, minute, sec = int(hour_str), int(minute_str), int(sec_str) - micros = int(micros_str.ljust(6, "0")) if micros_str else 0 - if offset_sign_str: - tz: tzinfo | None = cached_tz( - offset_hour_str, offset_minute_str, offset_sign_str - ) - elif zulu_time: - tz = timezone.utc - else: # local date-time - tz = None - return datetime(year, month, day, hour, minute, sec, micros, tzinfo=tz) - - -@lru_cache(maxsize=None) -def cached_tz(hour_str: str, minute_str: str, sign_str: str) -> timezone: - sign = 1 if sign_str == "+" else -1 - return timezone( - timedelta( - hours=sign * int(hour_str), - minutes=sign * int(minute_str), - ) - ) - - -def match_to_localtime(match: re.Match) -> time: - hour_str, minute_str, sec_str, micros_str = match.groups() - micros = int(micros_str.ljust(6, "0")) if micros_str else 0 - return time(int(hour_str), int(minute_str), int(sec_str), micros) - - -def match_to_number(match: re.Match, parse_float: ParseFloat) -> Any: - if match.group("floatpart"): - return parse_float(match.group()) - return int(match.group(), 0) diff --git a/spaces/Reeve/Ohayou_Face/utils/common.py b/spaces/Reeve/Ohayou_Face/utils/common.py deleted file mode 100644 index 4813fe311ee40720697e4862c5fbfad811d39237..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/utils/common.py +++ /dev/null @@ -1,87 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - if opts.label_nc == 0: - return tensor2im(x) - elif opts.label_nc == 1: - return tensor2sketch(x) - else: - return tensor2map(x) - - -def tensor2im(var): - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def tensor2map(var): - mask = np.argmax(var.data.cpu().numpy(), axis=0) - colors = get_colors() - mask_image = np.ones(shape=(mask.shape[0], mask.shape[1], 3)) - for class_idx in np.unique(mask): - mask_image[mask == class_idx] = colors[class_idx] - mask_image = mask_image.astype('uint8') - return Image.fromarray(mask_image) - - -def tensor2sketch(var): - im = var[0].cpu().detach().numpy() - im = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) - im = (im * 255).astype(np.uint8) - return Image.fromarray(im) - - -# Visualization utils -def get_colors(): - # currently support up to 19 classes (for the celebs-hq-mask dataset) - colors = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], - [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], - [255, 51, 153], [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]] - return colors - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/segmentors/base.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/segmentors/base.py deleted file mode 100644 index 172fc63b736c4f13be1cd909433bc260760a1eaa..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/segmentors/base.py +++ /dev/null @@ -1,273 +0,0 @@ -import logging -import warnings -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -from annotator.uniformer.mmcv.runner import auto_fp16 - - -class BaseSegmentor(nn.Module): - """Base class for segmentors.""" - - __metaclass__ = ABCMeta - - def __init__(self): - super(BaseSegmentor, self).__init__() - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the segmentor has neck""" - return hasattr(self, 'neck') and self.neck is not None - - @property - def with_auxiliary_head(self): - """bool: whether the segmentor has auxiliary head""" - return hasattr(self, - 'auxiliary_head') and self.auxiliary_head is not None - - @property - def with_decode_head(self): - """bool: whether the segmentor has decode head""" - return hasattr(self, 'decode_head') and self.decode_head is not None - - @abstractmethod - def extract_feat(self, imgs): - """Placeholder for extract features from images.""" - pass - - @abstractmethod - def encode_decode(self, img, img_metas): - """Placeholder for encode images with backbone and decode into a - semantic segmentation map of the same size as input.""" - pass - - @abstractmethod - def forward_train(self, imgs, img_metas, **kwargs): - """Placeholder for Forward function for training.""" - pass - - @abstractmethod - def simple_test(self, img, img_meta, **kwargs): - """Placeholder for single image test.""" - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Placeholder for augmentation test.""" - pass - - def init_weights(self, pretrained=None): - """Initialize the weights in segmentor. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if pretrained is not None: - logger = logging.getLogger() - logger.info(f'load model from: {pretrained}') - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got ' - f'{type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) != ' - f'num of image meta ({len(img_metas)})') - # all images in the same aug batch all of the same ori_shape and pad - # shape - for img_meta in img_metas: - ori_shapes = [_['ori_shape'] for _ in img_meta] - assert all(shape == ori_shapes[0] for shape in ori_shapes) - img_shapes = [_['img_shape'] for _ in img_meta] - assert all(shape == img_shapes[0] for shape in img_shapes) - pad_shapes = [_['pad_shape'] for _ in img_meta] - assert all(shape == pad_shapes[0] for shape in pad_shapes) - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. - """ - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def train_step(self, data_batch, optimizer, **kwargs): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, - ``num_samples``. - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - ``log_vars`` contains all the variables to be sent to the - logger. - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - def val_step(self, data_batch, **kwargs): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - output = self(**data_batch, **kwargs) - return output - - @staticmethod - def _parse_losses(losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor - which may be a weighted sum of all losses, log_vars contains - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def show_result(self, - img, - result, - palette=None, - win_name='', - show=False, - wait_time=0, - out_file=None, - opacity=0.5): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor): The semantic segmentation results to draw over - `img`. - palette (list[list[int]]] | np.ndarray | None): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - seg = result[0] - if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - assert palette.shape[0] == len(self.CLASSES) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - - if show: - mmcv.imshow(img, win_name, wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/epoch_based_runner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/epoch_based_runner.py deleted file mode 100644 index 766a9ce6afdf09cd11b1b15005f5132583011348..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/epoch_based_runner.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .utils import get_host_info - - -@RUNNERS.register_module() -class EpochBasedRunner(BaseRunner): - """Epoch-based Runner. - - This runner train models epoch by epoch. - """ - - def run_iter(self, data_batch, train_mode, **kwargs): - if self.batch_processor is not None: - outputs = self.batch_processor( - self.model, data_batch, train_mode=train_mode, **kwargs) - elif train_mode: - outputs = self.model.train_step(data_batch, self.optimizer, - **kwargs) - else: - outputs = self.model.val_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('"batch_processor()" or "model.train_step()"' - 'and "model.val_step()" must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._max_iters = self._max_epochs * len(self.data_loader) - self.call_hook('before_train_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_train_iter') - self.run_iter(data_batch, train_mode=True, **kwargs) - self.call_hook('after_train_iter') - self._iter += 1 - - self.call_hook('after_train_epoch') - self._epoch += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - self.call_hook('before_val_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_val_iter') - self.run_iter(data_batch, train_mode=False) - self.call_hook('after_val_iter') - - self.call_hook('after_val_epoch') - - def run(self, data_loaders, workflow, max_epochs=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, epochs) to specify the - running order and epochs. E.g, [('train', 2), ('val', 1)] means - running 2 epochs for training and 1 epoch for validation, - iteratively. - """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_epochs is not None: - warnings.warn( - 'setting max_epochs in run is deprecated, ' - 'please set max_epochs in runner_config', DeprecationWarning) - self._max_epochs = max_epochs - - assert self._max_epochs is not None, ( - 'max_epochs must be specified during instantiation') - - for i, flow in enumerate(workflow): - mode, epochs = flow - if mode == 'train': - self._max_iters = self._max_epochs * len(data_loaders[i]) - break - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d epochs', workflow, - self._max_epochs) - self.call_hook('before_run') - - while self.epoch < self._max_epochs: - for i, flow in enumerate(workflow): - mode, epochs = flow - if isinstance(mode, str): # self.train() - if not hasattr(self, mode): - raise ValueError( - f'runner has no method named "{mode}" to run an ' - 'epoch') - epoch_runner = getattr(self, mode) - else: - raise TypeError( - 'mode in workflow must be a str, but got {}'.format( - type(mode))) - - for _ in range(epochs): - if mode == 'train' and self.epoch >= self._max_epochs: - break - epoch_runner(data_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_run') - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - -@RUNNERS.register_module() -class Runner(EpochBasedRunner): - """Deprecated name of EpochBasedRunner.""" - - def __init__(self, *args, **kwargs): - warnings.warn( - 'Runner was deprecated, please use EpochBasedRunner instead') - super().__init__(*args, **kwargs) diff --git a/spaces/SceneDiffuser/SceneDiffuserDemo/style.css b/spaces/SceneDiffuser/SceneDiffuserDemo/style.css deleted file mode 100644 index 968478b264141f13347d97d6fd1f54f6d124cd05..0000000000000000000000000000000000000000 --- a/spaces/SceneDiffuser/SceneDiffuserDemo/style.css +++ /dev/null @@ -1 +0,0 @@ -#col-container {max-width: 1000px; margin-left: auto; margin-right: auto;} \ No newline at end of file diff --git a/spaces/Shredder/CONBERT/predict.py b/spaces/Shredder/CONBERT/predict.py deleted file mode 100644 index 8cbcb13a58a7515d7b33e1bc30be53ff92ec5acd..0000000000000000000000000000000000000000 --- a/spaces/Shredder/CONBERT/predict.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import time -from torch.utils.data import DataLoader, RandomSampler, SequentialSampler -from multiprocessing import cpu_count - -from transformers import ( - AutoConfig, - AutoModelForQuestionAnswering, - AutoTokenizer, - squad_convert_examples_to_features -) - -from transformers.data.processors.squad import SquadResult, SquadV2Processor, SquadExample -from transformers.data.metrics.squad_metrics import compute_predictions_logits - - -def run_prediction(question_texts, context_text, model_path, n_best_size=1): - max_seq_length = 512 - doc_stride = 256 - n_best_size = n_best_size - max_query_length = 64 - max_answer_length = 512 - do_lower_case = False - null_score_diff_threshold = 0.0 - - def to_list(tensor): - return tensor.detach().cpu().tolist() - - config_class, model_class, tokenizer_class = (AutoConfig, AutoModelForQuestionAnswering, AutoTokenizer) - config = config_class.from_pretrained(model_path) - tokenizer = tokenizer_class.from_pretrained(model_path, do_lower_case=True, use_fast=False) - model = model_class.from_pretrained(model_path, config=config) - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model.to(device) - - processor = SquadV2Processor() - examples = [] - - timer = time.time() - for i, question_text in enumerate(question_texts): - - example = SquadExample( - qas_id=str(i), - question_text=question_text, - context_text=context_text, - answer_text=None, - start_position_character=None, - title="Predict", - answers=None, - ) - - examples.append(example) - print(f'Created Squad Examples in {time.time()-timer} seconds') - - print(f'Number of CPUs: {cpu_count()}') - timer = time.time() - features, dataset = squad_convert_examples_to_features( - examples=examples, - tokenizer=tokenizer, - max_seq_length=max_seq_length, - doc_stride=doc_stride, - max_query_length=max_query_length, - is_training=False, - return_dataset="pt", - threads=cpu_count(), - ) - print(f'Converted Examples to Features in {time.time()-timer} seconds') - - eval_sampler = SequentialSampler(dataset) - eval_dataloader = DataLoader(dataset, sampler=eval_sampler, batch_size=10) - - all_results = [] - - timer = time.time() - for batch in eval_dataloader: - model.eval() - batch = tuple(t.to(device) for t in batch) - - with torch.no_grad(): - inputs = { - "input_ids": batch[0], - "attention_mask": batch[1], - "token_type_ids": batch[2], - } - - example_indices = batch[3] - - outputs = model(**inputs) - - for i, example_index in enumerate(example_indices): - eval_feature = features[example_index.item()] - unique_id = int(eval_feature.unique_id) - - output = [to_list(output[i]) for output in outputs.to_tuple()] - - start_logits, end_logits = output - result = SquadResult(unique_id, start_logits, end_logits) - all_results.append(result) - print(f'Model predictions completed in {time.time()-timer} seconds') - - print(all_results) - - output_nbest_file = None - if n_best_size > 1: - output_nbest_file = "nbest.json" - - timer = time.time() - final_predictions = compute_predictions_logits( - all_examples=examples, - all_features=features, - all_results=all_results, - n_best_size=n_best_size, - max_answer_length=max_answer_length, - do_lower_case=do_lower_case, - output_prediction_file=None, - output_nbest_file=output_nbest_file, - output_null_log_odds_file=None, - verbose_logging=False, - version_2_with_negative=True, - null_score_diff_threshold=null_score_diff_threshold, - tokenizer=tokenizer - ) - print(f'Logits converted to predictions in {time.time()-timer} seconds') - - return final_predictions diff --git a/spaces/StaticalizaAI/GPT-4/README.md b/spaces/StaticalizaAI/GPT-4/README.md deleted file mode 100644 index 3c62deaf29fb8b2c0e05938fa9edc20160a92f57..0000000000000000000000000000000000000000 --- a/spaces/StaticalizaAI/GPT-4/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: GPT-4 (real) -emoji: ✨ -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: openrail ---- \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/utils/test_messagid.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/utils/test_messagid.py deleted file mode 100644 index eff20a1b6fedaf460535f3dc7c4f64a70a27f8f1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/utils/test_messagid.py +++ /dev/null @@ -1,93 +0,0 @@ -import chromadb.utils.messageid as mid -import pulsar -import hypothesis.strategies as st -from hypothesis import given, settings, note -from typing import Any, Tuple - - -@st.composite -def message_id(draw: st.DrawFn) -> pulsar.MessageId: - ledger_id = draw(st.integers(min_value=0, max_value=2**63 - 1)) - entry_id = draw(st.integers(min_value=0, max_value=2**63 - 1)) - batch_index = draw(st.integers(min_value=(2**31 - 1) * -1, max_value=2**31 - 1)) - partition = draw(st.integers(min_value=(2**31 - 1) * -1, max_value=2**31 - 1)) - return pulsar.MessageId(partition, ledger_id, entry_id, batch_index) - - -@given(message_id=message_id()) -@settings(max_examples=10000) # these are very fast and we want good coverage -def test_roundtrip_formats(message_id: pulsar.MessageId) -> None: - int1 = mid.pulsar_to_int(message_id) - - # Roundtrip int->string and back - str1 = mid.int_to_str(int1) - assert int1 == mid.str_to_int(str1) - - # Roundtrip int->bytes and back - b1 = mid.int_to_bytes(int1) - assert int1 == mid.bytes_to_int(b1) - - # Roundtrip int -> MessageId and back - message_id_result = mid.int_to_pulsar(int1) - assert message_id_result.partition() == message_id.partition() - assert message_id_result.ledger_id() == message_id.ledger_id() - assert message_id_result.entry_id() == message_id.entry_id() - assert message_id_result.batch_index() == message_id.batch_index() - - -def assert_compare(pair1: Tuple[Any, Any], pair2: Tuple[Any, Any]) -> None: - """Helper function: assert that the two pairs of values always compare in the same - way across all comparisons and orderings.""" - - a, b = pair1 - c, d = pair2 - - try: - assert (a > b) == (c > d) - assert (a >= b) == (c >= d) - assert (a < b) == (c < d) - assert (a <= b) == (c <= d) - assert (a == b) == (c == d) - except AssertionError: - note(f"Failed to compare {a} and {b} with {c} and {d}") - note(f"type: {type(a)}") - raise - - -@given(m1=message_id(), m2=message_id()) -@settings(max_examples=10000) # these are very fast and we want good coverage -def test_messageid_comparison(m1: pulsar.MessageId, m2: pulsar.MessageId) -> None: - # MessageID comparison is broken in the Pulsar Python & CPP libraries: - # The partition field is not taken into account, and two MessageIDs with different - # partitions will compare inconsistently (m1 > m2 AND m2 > m1) - # To avoid this, we zero-out the partition field before testing. - m1 = pulsar.MessageId(0, m1.ledger_id(), m1.entry_id(), m1.batch_index()) - m2 = pulsar.MessageId(0, m2.ledger_id(), m2.entry_id(), m2.batch_index()) - - i1 = mid.pulsar_to_int(m1) - i2 = mid.pulsar_to_int(m2) - - # In python, MessageId objects are not comparable directory, but the - # internal generated native object is. - internal1 = m1._msg_id - internal2 = m2._msg_id - - s1 = mid.int_to_str(i1) - s2 = mid.int_to_str(i2) - - # assert that all strings, all ints, and all native objects compare the same - assert_compare((internal1, internal2), (i1, i2)) - assert_compare((internal1, internal2), (s1, s2)) - - -def test_max_values() -> None: - pulsar.MessageId(2**31 - 1, 2**63 - 1, 2**63 - 1, 2**31 - 1) - - -@given( - i1=st.integers(min_value=0, max_value=2**192 - 1), - i2=st.integers(min_value=0, max_value=2**192 - 1), -) -@settings(max_examples=10000) # these are very fast and we want good coverage -def test_string_comparison(i1: int, i2: int) -> None: - assert_compare((i1, i2), (mid.int_to_str(i1), mid.int_to_str(i2))) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/mmdet_wrapper.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/mmdet_wrapper.py deleted file mode 100644 index 5a60958cdc07e0170e4dfe02684bce259d42bdbc..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/mmdet_wrapper.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import logging -import numpy as np -from collections import OrderedDict -from collections.abc import Mapping -from typing import Dict, List, Optional, Tuple, Union -import torch -from omegaconf import DictConfig, OmegaConf -from torch import Tensor, nn - -from annotator.oneformer.detectron2.layers import ShapeSpec -from annotator.oneformer.detectron2.structures import BitMasks, Boxes, ImageList, Instances -from annotator.oneformer.detectron2.utils.events import get_event_storage - -from .backbone import Backbone - -logger = logging.getLogger(__name__) - - -def _to_container(cfg): - """ - mmdet will assert the type of dict/list. - So convert omegaconf objects to dict/list. - """ - if isinstance(cfg, DictConfig): - cfg = OmegaConf.to_container(cfg, resolve=True) - from mmcv.utils import ConfigDict - - return ConfigDict(cfg) - - -class MMDetBackbone(Backbone): - """ - Wrapper of mmdetection backbones to use in detectron2. - - mmdet backbones produce list/tuple of tensors, while detectron2 backbones - produce a dict of tensors. This class wraps the given backbone to produce - output in detectron2's convention, so it can be used in place of detectron2 - backbones. - """ - - def __init__( - self, - backbone: Union[nn.Module, Mapping], - neck: Union[nn.Module, Mapping, None] = None, - *, - output_shapes: List[ShapeSpec], - output_names: Optional[List[str]] = None, - ): - """ - Args: - backbone: either a backbone module or a mmdet config dict that defines a - backbone. The backbone takes a 4D image tensor and returns a - sequence of tensors. - neck: either a backbone module or a mmdet config dict that defines a - neck. The neck takes outputs of backbone and returns a - sequence of tensors. If None, no neck is used. - output_shapes: shape for every output of the backbone (or neck, if given). - stride and channels are often needed. - output_names: names for every output of the backbone (or neck, if given). - By default, will use "out0", "out1", ... - """ - super().__init__() - if isinstance(backbone, Mapping): - from mmdet.models import build_backbone - - backbone = build_backbone(_to_container(backbone)) - self.backbone = backbone - - if isinstance(neck, Mapping): - from mmdet.models import build_neck - - neck = build_neck(_to_container(neck)) - self.neck = neck - - # "Neck" weights, if any, are part of neck itself. This is the interface - # of mmdet so we follow it. Reference: - # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/two_stage.py - logger.info("Initializing mmdet backbone weights...") - self.backbone.init_weights() - # train() in mmdet modules is non-trivial, and has to be explicitly - # called. Reference: - # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/backbones/resnet.py - self.backbone.train() - if self.neck is not None: - logger.info("Initializing mmdet neck weights ...") - if isinstance(self.neck, nn.Sequential): - for m in self.neck: - m.init_weights() - else: - self.neck.init_weights() - self.neck.train() - - self._output_shapes = output_shapes - if not output_names: - output_names = [f"out{i}" for i in range(len(output_shapes))] - self._output_names = output_names - - def forward(self, x) -> Dict[str, Tensor]: - outs = self.backbone(x) - if self.neck is not None: - outs = self.neck(outs) - assert isinstance( - outs, (list, tuple) - ), "mmdet backbone should return a list/tuple of tensors!" - if len(outs) != len(self._output_shapes): - raise ValueError( - "Length of output_shapes does not match outputs from the mmdet backbone: " - f"{len(outs)} != {len(self._output_shapes)}" - ) - return {k: v for k, v in zip(self._output_names, outs)} - - def output_shape(self) -> Dict[str, ShapeSpec]: - return {k: v for k, v in zip(self._output_names, self._output_shapes)} - - -class MMDetDetector(nn.Module): - """ - Wrapper of a mmdetection detector model, for detection and instance segmentation. - Input/output formats of this class follow detectron2's convention, so a - mmdetection model can be trained and evaluated in detectron2. - """ - - def __init__( - self, - detector: Union[nn.Module, Mapping], - *, - # Default is 32 regardless of model: - # https://github.com/open-mmlab/mmdetection/tree/master/configs/_base_/datasets - size_divisibility=32, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - ): - """ - Args: - detector: a mmdet detector, or a mmdet config dict that defines a detector. - size_divisibility: pad input images to multiple of this number - pixel_mean: per-channel mean to normalize input image - pixel_std: per-channel stddev to normalize input image - """ - super().__init__() - if isinstance(detector, Mapping): - from mmdet.models import build_detector - - detector = build_detector(_to_container(detector)) - self.detector = detector - self.detector.init_weights() - self.size_divisibility = size_divisibility - - self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) - assert ( - self.pixel_mean.shape == self.pixel_std.shape - ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!" - - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, size_divisibility=self.size_divisibility).tensor - metas = [] - rescale = {"height" in x for x in batched_inputs} - if len(rescale) != 1: - raise ValueError("Some inputs have original height/width, but some don't!") - rescale = list(rescale)[0] - output_shapes = [] - for input in batched_inputs: - meta = {} - c, h, w = input["image"].shape - meta["img_shape"] = meta["ori_shape"] = (h, w, c) - if rescale: - scale_factor = np.array( - [w / input["width"], h / input["height"]] * 2, dtype="float32" - ) - ori_shape = (input["height"], input["width"]) - output_shapes.append(ori_shape) - meta["ori_shape"] = ori_shape + (c,) - else: - scale_factor = 1.0 - output_shapes.append((h, w)) - meta["scale_factor"] = scale_factor - meta["flip"] = False - padh, padw = images.shape[-2:] - meta["pad_shape"] = (padh, padw, c) - metas.append(meta) - - if self.training: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - if gt_instances[0].has("gt_masks"): - from mmdet.core import PolygonMasks as mm_PolygonMasks, BitmapMasks as mm_BitMasks - - def convert_mask(m, shape): - # mmdet mask format - if isinstance(m, BitMasks): - return mm_BitMasks(m.tensor.cpu().numpy(), shape[0], shape[1]) - else: - return mm_PolygonMasks(m.polygons, shape[0], shape[1]) - - gt_masks = [convert_mask(x.gt_masks, x.image_size) for x in gt_instances] - losses_and_metrics = self.detector.forward_train( - images, - metas, - [x.gt_boxes.tensor for x in gt_instances], - [x.gt_classes for x in gt_instances], - gt_masks=gt_masks, - ) - else: - losses_and_metrics = self.detector.forward_train( - images, - metas, - [x.gt_boxes.tensor for x in gt_instances], - [x.gt_classes for x in gt_instances], - ) - return _parse_losses(losses_and_metrics) - else: - results = self.detector.simple_test(images, metas, rescale=rescale) - results = [ - {"instances": _convert_mmdet_result(r, shape)} - for r, shape in zip(results, output_shapes) - ] - return results - - @property - def device(self): - return self.pixel_mean.device - - -# Reference: show_result() in -# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py -def _convert_mmdet_result(result, shape: Tuple[int, int]) -> Instances: - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] - else: - bbox_result, segm_result = result, None - - bboxes = torch.from_numpy(np.vstack(bbox_result)) # Nx5 - bboxes, scores = bboxes[:, :4], bboxes[:, -1] - labels = [ - torch.full((bbox.shape[0],), i, dtype=torch.int32) for i, bbox in enumerate(bbox_result) - ] - labels = torch.cat(labels) - inst = Instances(shape) - inst.pred_boxes = Boxes(bboxes) - inst.scores = scores - inst.pred_classes = labels - - if segm_result is not None and len(labels) > 0: - segm_result = list(itertools.chain(*segm_result)) - segm_result = [torch.from_numpy(x) if isinstance(x, np.ndarray) else x for x in segm_result] - segm_result = torch.stack(segm_result, dim=0) - inst.pred_masks = segm_result - return inst - - -# reference: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py -def _parse_losses(losses: Dict[str, Tensor]) -> Dict[str, Tensor]: - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError(f"{loss_name} is not a tensor or list of tensors") - - if "loss" not in loss_name: - # put metrics to storage; don't return them - storage = get_event_storage() - value = log_vars.pop(loss_name).cpu().item() - storage.put_scalar(loss_name, value) - return log_vars diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/grouped_batch_sampler.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/grouped_batch_sampler.py deleted file mode 100644 index 5b247730aacd04dd0c752664acde3257c4eddd71..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/grouped_batch_sampler.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from torch.utils.data.sampler import BatchSampler, Sampler - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that the batch only contain elements from the same group. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - """ - - def __init__(self, sampler, group_ids, batch_size): - """ - Args: - sampler (Sampler): Base sampler. - group_ids (list[int]): If the sampler produces indices in range [0, N), - `group_ids` must be a list of `N` ints which contains the group id of each sample. - The group ids must be a set of integers in the range [0, num_groups). - batch_size (int): Size of mini-batch. - """ - if not isinstance(sampler, Sampler): - raise ValueError( - "sampler should be an instance of " - "torch.utils.data.Sampler, but got sampler={}".format(sampler) - ) - self.sampler = sampler - self.group_ids = np.asarray(group_ids) - assert self.group_ids.ndim == 1 - self.batch_size = batch_size - groups = np.unique(self.group_ids).tolist() - - # buffer the indices of each group until batch size is reached - self.buffer_per_group = {k: [] for k in groups} - - def __iter__(self): - for idx in self.sampler: - group_id = self.group_ids[idx] - group_buffer = self.buffer_per_group[group_id] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] # yield a copy of the list - del group_buffer[:] - - def __len__(self): - raise NotImplementedError("len() of GroupedBatchSampler is not well-defined.") diff --git a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/models_dml.py b/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/models_dml.py deleted file mode 100644 index 958d7b29259763d2fea94caf8ba7e314c4a77d05..0000000000000000000000000000000000000000 --- a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/models_dml.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv.float() - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/UjjwalVIT/Text_analysis_and_metadata_app/stanfordmodel/ner.bat b/spaces/UjjwalVIT/Text_analysis_and_metadata_app/stanfordmodel/ner.bat deleted file mode 100644 index bb1fd6a18ded8f396dbc82d7921cc9b744baf9fa..0000000000000000000000000000000000000000 --- a/spaces/UjjwalVIT/Text_analysis_and_metadata_app/stanfordmodel/ner.bat +++ /dev/null @@ -1 +0,0 @@ -java -mx1000m -cp stanford-ner.jar;lib/* edu.stanford.nlp.ie.crf.CRFClassifier -loadClassifier classifiers\english.all.3class.distsim.crf.ser.gz -textFile %1 diff --git a/spaces/Vageesh1/PDF_QA/helper.py b/spaces/Vageesh1/PDF_QA/helper.py deleted file mode 100644 index fd6e33e1c5c8b397038c156ce8dac349f2b42558..0000000000000000000000000000000000000000 --- a/spaces/Vageesh1/PDF_QA/helper.py +++ /dev/null @@ -1,78 +0,0 @@ -import tempfile -import streamlit as st -from streamlit_chat import message - -import torch -import torch.nn - -import transformers -from transformers import ( - AutoModelForCausalLM, - AutoTokenizer, - BitsAndBytesConfig, - HfArgumentParser, - TrainingArguments, - pipeline, - logging, -) - - -import pandas as pd -import numpy as np -import os -import io - -from langchain.document_loaders import TextLoader -from langchain import PromptTemplate -from langchain.text_splitter import CharacterTextSplitter -from langchain.document_loaders import PyPDFLoader -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains.question_answering import load_qa_chain -from langchain.chains import RetrievalQA -from langchain import HuggingFacePipeline - - -def pdf_loader(file_path): - '''This is a function for loading the PDFs - Params: - file_path: The path of the PDF file - ''' - output_file = "Loaded_PDF.txt" - loader = PyPDFLoader(file_path) - pdf_file_as_loaded_docs = loader.load() - return pdf_file_as_loaded_docs - -def splitDoc(loaded_docs): - '''This is a function that creates the chunks of our loaded Document - Params: - loaded_docs:The loaded document from the pdf_loader function''' - splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=10) - chunked_docs = splitter.split_documents(loaded_docs) - return chunked_docs - -def makeEmbeddings(chunked_docs): - '''This is a functuon for making the embeddings of the chunked document - Params: - chunked_docs:The chunked docs''' - embedder = HuggingFaceEmbeddings() - vector_store = FAISS.from_documents(chunked_docs, embedder)#making a FAISS based vector data - return vector_store - - -def create_flan_t5_base(load_in_8bit=False): - ''''Loading the Flan T5 base in the form of pipeline''' - # Wrap it in HF pipeline for use with LangChain - model="google/flan-t5-base" - tokenizer = AutoTokenizer.from_pretrained(model) - return pipeline( - task="text2text-generation", - model=model, - tokenizer = tokenizer, - max_new_tokens=100, - model_kwargs={ "load_in_8bit": load_in_8bit, "max_length": 512, "temperature": 0.} - ) - - - - diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Zeabur.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Zeabur.py deleted file mode 100644 index e412720bd9a0c88860f6ea8a657cb0a24bcce63f..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Zeabur.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import requests -from ...typing import sha256, Dict, get_type_hints - -url = "https://gptleg.zeabur.app" -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-0301', - 'gpt-3.5-turbo-16k', 'gpt-4', 'gpt-4-0613'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'Authority': 'chat.dfehub.com', - 'Content-Type': 'application/json', - 'Method': 'POST', - 'Path': '/api/openai/v1/chat/completions', - 'Scheme': 'https', - 'Accept': 'text/event-stream', - 'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6,zh;q=0.5', - 'Content-Type': 'application/json', - 'Origin': 'https://gptleg.zeabur.app', - 'Referer': 'https://gptleg.zeabur.app/', - 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'Sec-Ch-Ua-Mobile': '?0', - 'Sec-Ch-Ua-Platform': '"Windows"', - 'Sec-Fetch-Dest': 'empty', - 'Sec-Fetch-Mode': 'cors', - 'Sec-Fetch-Site': 'same-origin', - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - 'X-Requested-With': 'XMLHttpRequest', - } - - data = { - 'model': model, - 'temperature': 0.7, - 'max_tokens': '16000', - 'presence_penalty': 0, - 'messages': messages, - } - - response = requests.post(url + '/api/openai/v1/chat/completions', - headers=headers, json=data, stream=stream) - - yield response.json()['choices'][0]['message']['content'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/helpers/gpt4love.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/helpers/gpt4love.py deleted file mode 100644 index 987fdbf8de5c27f7b827183d9c192dcf48d8ddcf..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/helpers/gpt4love.py +++ /dev/null @@ -1,48 +0,0 @@ -import json -import sys -from re import findall -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -prompt = config['messages'][-1]['content'] - -headers = { - 'authority': 'api.gptplus.one', - 'accept': 'application/json, text/plain, */*', - 'accept-language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - 'content-type': 'application/octet-stream', - 'origin': 'https://ai.gptforlove.com/', - 'referer': 'https://ai.gptforlove.com/', - 'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'cross-site', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36', -} - -json_data = { - 'prompt': prompt, - 'options': {} -} - -def format(chunk): - try: - completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0] - print(completion_chunk, flush=True, end='') - - except Exception as e: - print(f'[ERROR] an error occured, retrying... | [[{chunk.decode()}]]', flush=True) - return - -while True: - try: - response = requests.post('https://api.gptplus.one/api/chat-process', - headers=headers, json=json_data, content_callback=format, impersonate='chrome110') - - exit(0) - - except Exception as e: - print('[ERROR] an error occured, retrying... |', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/VishyVish/Face-ID-duplicated/app.py b/spaces/VishyVish/Face-ID-duplicated/app.py deleted file mode 100644 index 963fe62900d5bfe36216fa941e9d701a6f284d05..0000000000000000000000000000000000000000 --- a/spaces/VishyVish/Face-ID-duplicated/app.py +++ /dev/null @@ -1,31 +0,0 @@ -from sklearn.metrics.pairwise import cosine_similarity -from sentence_transformers import SentenceTransformer -import datasets -import gradio as gr - -#model = SentenceTransformer('clip-ViT-B-16') -model = SentenceTransformer('clip-ViT-B-32') -dataset = datasets.load_dataset('brendenc/celeb-identities') - -def predict(im1, im2): - - embeddings = model.encode([im1, im2]) - sim = cosine_similarity(embeddings) - sim = sim[0, 1] - if sim > 0.82: - return sim, "SAME PERSON, AUTHORIZE PAYMENT" - else: - return sim, "DIFFERENT PEOPLE, DON'T AUTHORIZE PAYMENT" - - -interface = gr.Interface(fn=predict, - inputs= [gr.Image(value = dataset['train']['image'][10], type="pil", source="webcam"), - gr.Image(value = dataset['train']['image'][17], type="pil", source="webcam")], - outputs= [gr.Number(label="Similarity"), - gr.Textbox(label="Message")], - title = 'Face ID', - description = 'This app uses face biometrics and a similarity to function as a Face ID application.The similarity score ranges from -1 to 1.' - ) - -interface.launch(debug=True) -#interface.launch(share=True) diff --git a/spaces/VoiceHero69/changer/webui/ui/tabs/__init__.py b/spaces/VoiceHero69/changer/webui/ui/tabs/__init__.py deleted file mode 100644 index 24b63c0aa682b34839fae7bdda39c86456517683..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/ui/tabs/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .settings import extra_tab -from .rvc import rvc - diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/misc.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/misc.py deleted file mode 100644 index d64b84ef24bea0c98e76824feb1903f6bfebe7a5..0000000000000000000000000000000000000000 --- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/misc.py +++ /dev/null @@ -1,717 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. -""" -import colorsys -import datetime -import functools -import io -import json -import os -import pickle -import subprocess -import time -from collections import OrderedDict, defaultdict, deque -from typing import List, Optional - -import numpy as np -import torch -import torch.distributed as dist - -# needed due to empty tensor bug in pytorch and torchvision 0.5 -import torchvision -from torch import Tensor - -__torchvision_need_compat_flag = float(torchvision.__version__.split(".")[1]) < 7 -if __torchvision_need_compat_flag: - from torchvision.ops import _new_empty_tensor - from torchvision.ops.misc import _output_size - - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! - """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda") - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - if d.shape[0] == 0: - return 0 - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - if os.environ.get("SHILONG_AMP", None) == "1": - eps = 1e-4 - else: - eps = 1e-6 - return self.total / (self.count + eps) - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value, - ) - - -@functools.lru_cache() -def _get_global_gloo_group(): - """ - Return a process group based on gloo backend, containing all the ranks - The result is cached. - """ - - if dist.get_backend() == "nccl": - return dist.new_group(backend="gloo") - - return dist.group.WORLD - - -def all_gather_cpu(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - - world_size = get_world_size() - if world_size == 1: - return [data] - - cpu_group = _get_global_gloo_group() - - buffer = io.BytesIO() - torch.save(data, buffer) - data_view = buffer.getbuffer() - device = "cuda" if cpu_group is None else "cpu" - tensor = torch.ByteTensor(data_view).to(device) - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device=device, dtype=torch.long) - size_list = [torch.tensor([0], device=device, dtype=torch.long) for _ in range(world_size)] - if cpu_group is None: - dist.all_gather(size_list, local_size) - else: - print("gathering on cpu") - dist.all_gather(size_list, local_size, group=cpu_group) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - assert isinstance(local_size.item(), int) - local_size = int(local_size.item()) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=device)) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=device) - tensor = torch.cat((tensor, padding), dim=0) - if cpu_group is None: - dist.all_gather(tensor_list, tensor) - else: - dist.all_gather(tensor_list, tensor, group=cpu_group) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - tensor = torch.split(tensor, [size, max_size - size], dim=0)[0] - buffer = io.BytesIO(tensor.cpu().numpy()) - obj = torch.load(buffer) - data_list.append(obj) - - return data_list - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - - if os.getenv("CPU_REDUCE") == "1": - return all_gather_cpu(data) - - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device="cuda") - size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda")) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that all processes - have the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.all_reduce(values) - if average: - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, attr)) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - # print(name, str(meter)) - # import ipdb;ipdb.set_trace() - if meter.count > 0: - loss_str.append("{}: {}".format(name, str(meter))) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None, logger=None): - if logger is None: - print_func = print - else: - print_func = logger.info - - i = 0 - if not header: - header = "" - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt="{avg:.4f}") - data_time = SmoothedValue(fmt="{avg:.4f}") - space_fmt = ":" + str(len(str(len(iterable)))) + "d" - if torch.cuda.is_available(): - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - "max mem: {memory:.0f}", - ] - ) - else: - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - ] - ) - MB = 1024.0 * 1024.0 - for obj in iterable: - data_time.update(time.time() - end) - yield obj - # import ipdb; ipdb.set_trace() - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print_func( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - memory=torch.cuda.max_memory_allocated() / MB, - ) - ) - else: - print_func( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - ) - ) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print_func( - "{} Total time: {} ({:.4f} s / it)".format( - header, total_time_str, total_time / len(iterable) - ) - ) - - -def get_sha(): - cwd = os.path.dirname(os.path.abspath(__file__)) - - def _run(command): - return subprocess.check_output(command, cwd=cwd).decode("ascii").strip() - - sha = "N/A" - diff = "clean" - branch = "N/A" - try: - sha = _run(["git", "rev-parse", "HEAD"]) - subprocess.check_output(["git", "diff"], cwd=cwd) - diff = _run(["git", "diff-index", "HEAD"]) - diff = "has uncommited changes" if diff else "clean" - branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"]) - except Exception: - pass - message = f"sha: {sha}, status: {diff}, branch: {branch}" - return message - - -def collate_fn(batch): - # import ipdb; ipdb.set_trace() - batch = list(zip(*batch)) - batch[0] = nested_tensor_from_tensor_list(batch[0]) - return tuple(batch) - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - if mask == "auto": - self.mask = torch.zeros_like(tensors).to(tensors.device) - if self.mask.dim() == 3: - self.mask = self.mask.sum(0).to(bool) - elif self.mask.dim() == 4: - self.mask = self.mask.sum(1).to(bool) - else: - raise ValueError( - "tensors dim must be 3 or 4 but {}({})".format( - self.tensors.dim(), self.tensors.shape - ) - ) - - def imgsize(self): - res = [] - for i in range(self.tensors.shape[0]): - mask = self.mask[i] - maxH = (~mask).sum(0).max() - maxW = (~mask).sum(1).max() - res.append(torch.Tensor([maxH, maxW])) - return res - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def to_img_list_single(self, tensor, mask): - assert tensor.dim() == 3, "dim of tensor should be 3 but {}".format(tensor.dim()) - maxH = (~mask).sum(0).max() - maxW = (~mask).sum(1).max() - img = tensor[:, :maxH, :maxW] - return img - - def to_img_list(self): - """remove the padding and convert to img list - - Returns: - [type]: [description] - """ - if self.tensors.dim() == 3: - return self.to_img_list_single(self.tensors, self.mask) - else: - res = [] - for i in range(self.tensors.shape[0]): - tensor_i = self.tensors[i] - mask_i = self.mask[i] - res.append(self.to_img_list_single(tensor_i, mask_i)) - return res - - @property - def device(self): - return self.tensors.device - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - @property - def shape(self): - return {"tensors.shape": self.tensors.shape, "mask.shape": self.mask.shape} - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - if torchvision._is_tracing(): - # nested_tensor_from_tensor_list() does not export well to ONNX - # call _onnx_nested_tensor_from_tensor_list() instead - return _onnx_nested_tensor_from_tensor_list(tensor_list) - - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") - return NestedTensor(tensor, mask) - - -# _onnx_nested_tensor_from_tensor_list() is an implementation of -# nested_tensor_from_tensor_list() that is supported by ONNX tracing. -@torch.jit.unused -def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor: - max_size = [] - for i in range(tensor_list[0].dim()): - max_size_i = torch.max( - torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32) - ).to(torch.int64) - max_size.append(max_size_i) - max_size = tuple(max_size) - - # work around for - # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - # m[: img.shape[1], :img.shape[2]] = False - # which is not yet supported in onnx - padded_imgs = [] - padded_masks = [] - for img in tensor_list: - padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))] - padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0])) - padded_imgs.append(padded_img) - - m = torch.zeros_like(img[0], dtype=torch.int, device=img.device) - padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1) - padded_masks.append(padded_mask.to(torch.bool)) - - tensor = torch.stack(padded_imgs) - mask = torch.stack(padded_masks) - - return NestedTensor(tensor, mask=mask) - - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop("force", False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - - -def init_distributed_mode(args): - if "WORLD_SIZE" in os.environ and os.environ["WORLD_SIZE"] != "": # 'RANK' in os.environ and - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ["WORLD_SIZE"]) - args.gpu = args.local_rank = int(os.environ["LOCAL_RANK"]) - - # launch by torch.distributed.launch - # Single node - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 1 --rank 0 ... - # Multi nodes - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 0 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ... - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 1 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ... - # args.rank = int(os.environ.get('OMPI_COMM_WORLD_RANK')) - # local_world_size = int(os.environ['GPU_PER_NODE_COUNT']) - # args.world_size = args.world_size * local_world_size - # args.gpu = args.local_rank = int(os.environ['LOCAL_RANK']) - # args.rank = args.rank * local_world_size + args.local_rank - print( - "world size: {}, rank: {}, local rank: {}".format( - args.world_size, args.rank, args.local_rank - ) - ) - print(json.dumps(dict(os.environ), indent=2)) - elif "SLURM_PROCID" in os.environ: - args.rank = int(os.environ["SLURM_PROCID"]) - args.gpu = args.local_rank = int(os.environ["SLURM_LOCALID"]) - args.world_size = int(os.environ["SLURM_NPROCS"]) - - print( - "world size: {}, world rank: {}, local rank: {}, device_count: {}".format( - args.world_size, args.rank, args.local_rank, torch.cuda.device_count() - ) - ) - else: - print("Not using distributed mode") - args.distributed = False - args.world_size = 1 - args.rank = 0 - args.local_rank = 0 - return - - print("world_size:{} rank:{} local_rank:{}".format(args.world_size, args.rank, args.local_rank)) - args.distributed = True - torch.cuda.set_device(args.local_rank) - args.dist_backend = "nccl" - print("| distributed init (rank {}): {}".format(args.rank, args.dist_url), flush=True) - - torch.distributed.init_process_group( - backend=args.dist_backend, - world_size=args.world_size, - rank=args.rank, - init_method=args.dist_url, - ) - - print("Before torch.distributed.barrier()") - torch.distributed.barrier() - print("End torch.distributed.barrier()") - setup_for_distributed(args.rank == 0) - - -@torch.no_grad() -def accuracy(output, target, topk=(1,)): - """Computes the precision@k for the specified values of k""" - if target.numel() == 0: - return [torch.zeros([], device=output.device)] - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].view(-1).float().sum(0) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -@torch.no_grad() -def accuracy_onehot(pred, gt): - """_summary_ - - Args: - pred (_type_): n, c - gt (_type_): n, c - """ - tp = ((pred - gt).abs().sum(-1) < 1e-4).float().sum() - acc = tp / gt.shape[0] * 100 - return acc - - -def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None): - # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor - """ - Equivalent to nn.functional.interpolate, but with support for empty batch sizes. - This will eventually be supported natively by PyTorch, and this - class can go away. - """ - if __torchvision_need_compat_flag < 0.7: - if input.numel() > 0: - return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners) - - output_shape = _output_size(2, input, size, scale_factor) - output_shape = list(input.shape[:-2]) + list(output_shape) - return _new_empty_tensor(input, output_shape) - else: - return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners) - - -class color_sys: - def __init__(self, num_colors) -> None: - self.num_colors = num_colors - colors = [] - for i in np.arange(0.0, 360.0, 360.0 / num_colors): - hue = i / 360.0 - lightness = (50 + np.random.rand() * 10) / 100.0 - saturation = (90 + np.random.rand() * 10) / 100.0 - colors.append( - tuple([int(j * 255) for j in colorsys.hls_to_rgb(hue, lightness, saturation)]) - ) - self.colors = colors - - def __call__(self, idx): - return self.colors[idx] - - -def inverse_sigmoid(x, eps=1e-3): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -def clean_state_dict(state_dict): - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k[:7] == "module.": - k = k[7:] # remove `module.` - new_state_dict[k] = v - return new_state_dict diff --git a/spaces/Web3Daily/WebGPT3/README.md b/spaces/Web3Daily/WebGPT3/README.md deleted file mode 100644 index 235570f33c8eff65ba39454b1ea36dcefbe0e7e8..0000000000000000000000000000000000000000 --- a/spaces/Web3Daily/WebGPT3/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: WebGPT3 -emoji: 🐨 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- -"CSS": [".css"], - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/XzJosh/nine2-Bert-VITS2/text/english_bert_mock.py b/spaces/XzJosh/nine2-Bert-VITS2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nine2-Bert-VITS2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/YONG627/456123/yolov5-code-main/example_request.py b/spaces/YONG627/456123/yolov5-code-main/example_request.py deleted file mode 100644 index 19d8742ffc7cbe0ee89d61026ffe8e8a621cc0ff..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/example_request.py +++ /dev/null @@ -1,33 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Perform test request -""" - -import pprint -import cv2 -import numpy as np -import matplotlib.pyplot as plt - -import requests - -DETECTION_URL = 'http://localhost:5000/v1/object-detection/yolov5s' -IMAGE = 'data/images/zidane.jpg' - -# Read image -# with open(IMAGE, 'rb') as f: -# image_data = f.read() - -img = cv2.imread(IMAGE) - -img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - -img = cv2.imencode(".jpg", img)[1].tobytes() - -response = requests.post(DETECTION_URL, data=img) - -img = cv2.imdecode(np.frombuffer(response.content, dtype=np.uint8), cv2.IMREAD_COLOR) - -plt.imshow(img) -plt.show() - -# pprint.pprint(response) diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py deleted file mode 100644 index c153c42455dc4d6b9c4a532edaac0aa8f4dcca1d..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py +++ /dev/null @@ -1,157 +0,0 @@ -# This code is copied from https://github.com/thomasjpfan/pytorch/blob/401ec389db2c9d2978917a6e4d1101b20340d7e7/torch/optim/lr_scheduler.py - - -# This code is under review at PyTorch and is to be merged eventually to make CLR available to all. -# Tested with pytorch 0.2.0 - -import numpy as np - - -class CyclicLR(object): - """Sets the learning rate of each parameter group according to - cyclical learning rate policy (CLR). The policy cycles the learning - rate between two boundaries with a constant frequency, as detailed in - the paper `Cyclical Learning Rates for Training Neural Networks`_. - The distance between the two boundaries can be scaled on a per-iteration - or per-cycle basis. - Cyclical learning rate policy changes the learning rate after every batch. - `batch_step` should be called after a batch has been used for training. - To resume training, save `last_batch_iteration` and use it to instantiate `CycleLR`. - This class has three built-in policies, as put forth in the paper: - "triangular": - A basic triangular cycle w/ no amplitude scaling. - "triangular2": - A basic triangular cycle that scales initial amplitude by half each cycle. - "exp_range": - A cycle that scales initial amplitude by gamma**(cycle iterations) at each - cycle iteration. - This implementation was adapted from the github repo: `bckenstler/CLR`_ - Args: - optimizer (Optimizer): Wrapped optimizer. - base_lr (float or list): Initial learning rate which is the - lower boundary in the cycle for eachparam groups. - Default: 0.001 - max_lr (float or list): Upper boundaries in the cycle for - each parameter group. Functionally, - it defines the cycle amplitude (max_lr - base_lr). - The lr at any cycle is the sum of base_lr - and some scaling of the amplitude; therefore - max_lr may not actually be reached depending on - scaling function. Default: 0.006 - step_size (int): Number of training iterations per - half cycle. Authors suggest setting step_size - 2-8 x training iterations in epoch. Default: 2000 - mode (str): One of {triangular, triangular2, exp_range}. - Values correspond to policies detailed above. - If scale_fn is not None, this argument is ignored. - Default: 'triangular' - gamma (float): Constant in 'exp_range' scaling function: - gamma**(cycle iterations) - Default: 1.0 - scale_fn (function): Custom scaling policy defined by a single - argument lambda function, where - 0 <= scale_fn(x) <= 1 for all x >= 0. - mode paramater is ignored - Default: None - scale_mode (str): {'cycle', 'iterations'}. - Defines whether scale_fn is evaluated on - cycle number or cycle iterations (training - iterations since start of cycle). - Default: 'cycle' - last_batch_iteration (int): The index of the last batch. Default: -1 - Example: - >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) - >>> scheduler = torch.optim.CyclicLR(optimizer) - >>> data_loader = torch.utils.data.DataLoader(...) - >>> for epoch in range(10): - >>> for batch in data_loader: - >>> scheduler.batch_step() - >>> train_batch(...) - .. _Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 - .. _bckenstler/CLR: https://github.com/bckenstler/CLR - """ - - def __init__(self, optimizer, base_lr=1e-3, max_lr=6e-3, - step_size=2000, mode='triangular', gamma=1., - scale_fn=None, scale_mode='cycle', last_batch_iteration=-1): - - # if not isinstance(optimizer, Optimizer): - # raise TypeError('{} is not an Optimizer'.format( - # type(optimizer).__name__)) - self.optimizer = optimizer - - if isinstance(base_lr, list) or isinstance(base_lr, tuple): - if len(base_lr) != len(optimizer.param_groups): - raise ValueError("expected {} base_lr, got {}".format( - len(optimizer.param_groups), len(base_lr))) - self.base_lrs = list(base_lr) - else: - self.base_lrs = [base_lr] * len(optimizer.param_groups) - - if isinstance(max_lr, list) or isinstance(max_lr, tuple): - if len(max_lr) != len(optimizer.param_groups): - raise ValueError("expected {} max_lr, got {}".format( - len(optimizer.param_groups), len(max_lr))) - self.max_lrs = list(max_lr) - else: - self.max_lrs = [max_lr] * len(optimizer.param_groups) - - self.step_size = step_size - - if mode not in ['triangular', 'triangular2', 'exp_range'] \ - and scale_fn is None: - raise ValueError('mode is invalid and scale_fn is None') - - self.mode = mode - self.gamma = gamma - self.current_lr = None - - if scale_fn is None: - if self.mode == 'triangular': - self.scale_fn = self._triangular_scale_fn - self.scale_mode = 'cycle' - elif self.mode == 'triangular2': - self.scale_fn = self._triangular2_scale_fn - self.scale_mode = 'cycle' - elif self.mode == 'exp_range': - self.scale_fn = self._exp_range_scale_fn - self.scale_mode = 'iterations' - else: - self.scale_fn = scale_fn - self.scale_mode = scale_mode - - self.batch_step(last_batch_iteration + 1) - self.last_batch_iteration = last_batch_iteration - - def batch_step(self, batch_iteration=None): - if batch_iteration is None: - batch_iteration = self.last_batch_iteration + 1 - self.last_batch_iteration = batch_iteration - for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()): - param_group['lr'] = lr - self.current_lr = lr - - def _triangular_scale_fn(self, x): - return 1. - - def _triangular2_scale_fn(self, x): - return 1 / (2. ** (x - 1)) - - def _exp_range_scale_fn(self, x): - return self.gamma ** (x) - - def get_lr(self): - step_size = float(self.step_size) - cycle = np.floor(1 + self.last_batch_iteration / (2 * step_size)) - x = np.abs(self.last_batch_iteration / step_size - 2 * cycle + 1) - - lrs = [] - param_lrs = zip(self.optimizer.param_groups, self.base_lrs, self.max_lrs) - for param_group, base_lr, max_lr in param_lrs: - base_height = (max_lr - base_lr) * np.maximum(0, (1 - x)) - if self.scale_mode == 'cycle': - lr = base_lr + base_height * self.scale_fn(cycle) - else: - lr = base_lr + base_height * self.scale_fn(self.last_batch_iteration) - lrs.append(lr) - return lrs diff --git a/spaces/YuxinJ/Scenimefy/app.py b/spaces/YuxinJ/Scenimefy/app.py deleted file mode 100644 index d2b760244eb531b408ecb54cd413dea42aecd9a9..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/app.py +++ /dev/null @@ -1,158 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import torch -import gradio as gr - -from Scenimefy.options.test_options import TestOptions -from Scenimefy.models import create_model -from Scenimefy.utils.util import tensor2im - -from PIL import Image -import torchvision.transforms as transforms - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -TITLE = ''' - Scene Stylization with
              Scenimefy - ''' -DESCRIPTION = ''' -
              -

              -Gradio Demo for Scenimefy - a model transforming real-life photos into Shinkai-animation-style images. -To use it, simply upload your image, or click one of the examples to load them. -For best outcomes, please pick a natural landscape image similar to the examples below. -Kindly note that our model is trained on 256x256 resolution images, using much higher resolutions might affect its performance. -Read more in our paper. -

              -
              -''' -EXAMPLES = [['0.jpg'], ['1.png'], ['2.jpg'], ['3.png'], ['4.png'], ['5.png'], ['6.jpg'], ['7.png'], ['8.png']] -ARTICLE = r""" -If Scenimefy is helpful, please help to ⭐ the Github Repo. Thank you! -🤟 **Citation** -If our work is useful for your research, please consider citing: -```bibtex -@inproceedings{jiang2023scenimefy, - title={Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation}, - author={Jiang, Yuxin and Jiang, Liming and Yang, Shuai and Loy, Chen Change}, - booktitle={ICCV}, - year={2023} -} -``` -🗞️ **License** -This project is licensed under S-Lab License 1.0. -Redistribution and use for non-commercial purposes should follow this license. -""" - - -model = None - - -def initialize(): - opt = TestOptions().parse() # get test options - # os.environ["CUDA_VISIBLE_DEVICES"] = str(1) - # hard-code some parameters for test - opt.num_threads = 0 # test code only supports num_threads = 1 - opt.batch_size = 1 # test code only supports batch_size = 1 - opt.serial_batches = True # disable data shuffling; comment this line if results on randomly chosen images are needed. - opt.no_flip = True # no flip; comment this line if results on flipped images are needed. - opt.display_id = -1 # no visdom display; the test code saves the results to a HTML file. - - # dataset = create_dataset(opt) # create a dataset given opt.dataset_mode and other options - global model - model = create_model(opt) # create a model given opt.model and other options - - dummy_data = { - 'A': torch.ones(1, 3, 256, 256), - 'B': torch.ones(1, 3, 256, 256), - 'A_paths': 'upload.jpg' - } - - model.data_dependent_initialize(dummy_data) - model.setup(opt) # regular setup: load and print networks; create schedulers - model.parallelize() - return model - - -def __make_power_2(img, base, method=Image.BICUBIC): - ow, oh = img.size - h = int(round(oh / base) * base) - w = int(round(ow / base) * base) - if h == oh and w == ow: - return img - - return img.resize((w, h), method) - - -def get_transform(): - method=Image.BICUBIC - transform_list = [] - # if opt.preprocess == 'none': - transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base=4, method=method))) - transform_list += [transforms.ToTensor()] - transform_list += [transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))] - return transforms.Compose(transform_list) - - -def inference(img): - transform = get_transform() - A = transform(img.convert('RGB')) # A.shape: torch.Size([3, 260, 460]) - A = A.unsqueeze(0) # A.shape: torch.Size([1, 3, 260, 460]) - - upload_data = { - 'A': A, - 'B': torch.ones_like(A), - 'A_paths': 'upload.jpg' - } - - global model - model.set_input(upload_data) # unpack data from data loader - model.test() # run inference - visuals = model.get_current_visuals() - return tensor2im(visuals['fake_B']) - - -def main(): - args = parse_args() - args.device = 'cuda' if torch.cuda.is_available() else 'cpu' - print('*** Now using %s.'%(args.device)) - - global model - model = initialize() - - gr.Interface( - inference, - gr.Image(type="pil", label='Input'), - gr.Image(type="pil", label='Output').style(height=300), - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - examples=EXAMPLES, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share - ) - -if __name__ == '__main__': - main() diff --git a/spaces/aadnk/whisper-webui/src/segments.py b/spaces/aadnk/whisper-webui/src/segments.py deleted file mode 100644 index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000 --- a/spaces/aadnk/whisper-webui/src/segments.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, List - -import copy - -def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1): - result = [] - - if len(timestamps) == 0: - return result - if max_merge_size is None: - return timestamps - - if padding_left is None: - padding_left = 0 - if padding_right is None: - padding_right = 0 - - processed_time = 0 - current_segment = None - - for i in range(len(timestamps)): - next_segment = timestamps[i] - - delta = next_segment['start'] - processed_time - - # Note that segments can still be longer than the max merge size, they just won't be merged in that case - if current_segment is None or (merge_window is not None and delta > merge_window) \ - or next_segment['end'] - current_segment['start'] > max_merge_size: - # Finish the current segment - if current_segment is not None: - # Add right padding - finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right - current_segment['end'] += finish_padding - delta -= finish_padding - - result.append(current_segment) - - # Start a new segment - current_segment = copy.deepcopy(next_segment) - - # Pad the segment - current_segment['start'] = current_segment['start'] - min(padding_left, delta) - processed_time = current_segment['end'] - - else: - # Merge the segment - current_segment['end'] = next_segment['end'] - processed_time = current_segment['end'] - - # Add the last segment - if current_segment is not None: - current_segment['end'] += padding_right - result.append(current_segment) - - return result \ No newline at end of file diff --git a/spaces/abbbbbbbbbbbbbb/AraPoet/README.md b/spaces/abbbbbbbbbbbbbb/AraPoet/README.md deleted file mode 100644 index e58952fb662cca69245955e5bb1aa741877cc09e..0000000000000000000000000000000000000000 --- a/spaces/abbbbbbbbbbbbbb/AraPoet/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: AraPoet -emoji: ✍️ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: bkhmsi/AraPoet ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/video/optflow.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/video/optflow.py deleted file mode 100644 index 84160f8d6ef9fceb5a2f89e7481593109fc1905d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/video/optflow.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import cv2 -import numpy as np - -from annotator.uniformer.mmcv.arraymisc import dequantize, quantize -from annotator.uniformer.mmcv.image import imread, imwrite -from annotator.uniformer.mmcv.utils import is_str - - -def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs): - """Read an optical flow map. - - Args: - flow_or_path (ndarray or str): A flow map or filepath. - quantize (bool): whether to read quantized pair, if set to True, - remaining args will be passed to :func:`dequantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - - Returns: - ndarray: Optical flow represented as a (h, w, 2) numpy array - """ - if isinstance(flow_or_path, np.ndarray): - if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2): - raise ValueError(f'Invalid flow with shape {flow_or_path.shape}') - return flow_or_path - elif not is_str(flow_or_path): - raise TypeError(f'"flow_or_path" must be a filename or numpy array, ' - f'not {type(flow_or_path)}') - - if not quantize: - with open(flow_or_path, 'rb') as f: - try: - header = f.read(4).decode('utf-8') - except Exception: - raise IOError(f'Invalid flow file: {flow_or_path}') - else: - if header != 'PIEH': - raise IOError(f'Invalid flow file: {flow_or_path}, ' - 'header does not contain PIEH') - - w = np.fromfile(f, np.int32, 1).squeeze() - h = np.fromfile(f, np.int32, 1).squeeze() - flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2)) - else: - assert concat_axis in [0, 1] - cat_flow = imread(flow_or_path, flag='unchanged') - if cat_flow.ndim != 2: - raise IOError( - f'{flow_or_path} is not a valid quantized flow file, ' - f'its dimension is {cat_flow.ndim}.') - assert cat_flow.shape[concat_axis] % 2 == 0 - dx, dy = np.split(cat_flow, 2, axis=concat_axis) - flow = dequantize_flow(dx, dy, *args, **kwargs) - - return flow.astype(np.float32) - - -def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs): - """Write optical flow to file. - - If the flow is not quantized, it will be saved as a .flo file losslessly, - otherwise a jpeg image which is lossy but of much smaller size. (dx and dy - will be concatenated horizontally into a single image if quantize is True.) - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - filename (str): Output filepath. - quantize (bool): Whether to quantize the flow and save it to 2 jpeg - images. If set to True, remaining args will be passed to - :func:`quantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - """ - if not quantize: - with open(filename, 'wb') as f: - f.write('PIEH'.encode('utf-8')) - np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f) - flow = flow.astype(np.float32) - flow.tofile(f) - f.flush() - else: - assert concat_axis in [0, 1] - dx, dy = quantize_flow(flow, *args, **kwargs) - dxdy = np.concatenate((dx, dy), axis=concat_axis) - imwrite(dxdy, filename) - - -def quantize_flow(flow, max_val=0.02, norm=True): - """Quantize flow to [0, 255]. - - After this step, the size of flow will be much smaller, and can be - dumped as jpeg images. - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - max_val (float): Maximum value of flow, values beyond - [-max_val, max_val] will be truncated. - norm (bool): Whether to divide flow values by image width/height. - - Returns: - tuple[ndarray]: Quantized dx and dy. - """ - h, w, _ = flow.shape - dx = flow[..., 0] - dy = flow[..., 1] - if norm: - dx = dx / w # avoid inplace operations - dy = dy / h - # use 255 levels instead of 256 to make sure 0 is 0 after dequantization. - flow_comps = [ - quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy] - ] - return tuple(flow_comps) - - -def dequantize_flow(dx, dy, max_val=0.02, denorm=True): - """Recover from quantized flow. - - Args: - dx (ndarray): Quantized dx. - dy (ndarray): Quantized dy. - max_val (float): Maximum value used when quantizing. - denorm (bool): Whether to multiply flow values with width/height. - - Returns: - ndarray: Dequantized flow. - """ - assert dx.shape == dy.shape - assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1) - - dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]] - - if denorm: - dx *= dx.shape[1] - dy *= dx.shape[0] - flow = np.dstack((dx, dy)) - return flow - - -def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'): - """Use flow to warp img. - - Args: - img (ndarray, float or uint8): Image to be warped. - flow (ndarray, float): Optical Flow. - filling_value (int): The missing pixels will be set with filling_value. - interpolate_mode (str): bilinear -> Bilinear Interpolation; - nearest -> Nearest Neighbor. - - Returns: - ndarray: Warped image with the same shape of img - """ - warnings.warn('This function is just for prototyping and cannot ' - 'guarantee the computational efficiency.') - assert flow.ndim == 3, 'Flow must be in 3D arrays.' - height = flow.shape[0] - width = flow.shape[1] - channels = img.shape[2] - - output = np.ones( - (height, width, channels), dtype=img.dtype) * filling_value - - grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2) - dx = grid[:, :, 0] + flow[:, :, 1] - dy = grid[:, :, 1] + flow[:, :, 0] - sx = np.floor(dx).astype(int) - sy = np.floor(dy).astype(int) - valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1) - - if interpolate_mode == 'nearest': - output[valid, :] = img[dx[valid].round().astype(int), - dy[valid].round().astype(int), :] - elif interpolate_mode == 'bilinear': - # dirty walkround for integer positions - eps_ = 1e-6 - dx, dy = dx + eps_, dy + eps_ - left_top_ = img[np.floor(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - left_down_ = img[np.ceil(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - right_top_ = img[np.floor(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - right_down_ = img[np.ceil(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_ - else: - raise NotImplementedError( - 'We only support interpolation modes of nearest and bilinear, ' - f'but got {interpolate_mode}.') - return output.astype(img.dtype) - - -def flow_from_bytes(content): - """Read dense optical flow from bytes. - - .. note:: - This load optical flow function works for FlyingChairs, FlyingThings3D, - Sintel, FlyingChairsOcc datasets, but cannot load the data from - ChairsSDHom. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - ndarray: Loaded optical flow with the shape (H, W, 2). - """ - - # header in first 4 bytes - header = content[:4] - if header.decode('utf-8') != 'PIEH': - raise Exception('Flow file header does not contain PIEH') - # width in second 4 bytes - width = np.frombuffer(content[4:], np.int32, 1).squeeze() - # height in third 4 bytes - height = np.frombuffer(content[8:], np.int32, 1).squeeze() - # after first 12 bytes, all bytes are flow - flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape( - (height, width, 2)) - - return flow - - -def sparse_flow_from_bytes(content): - """Read the optical flow in KITTI datasets from bytes. - - This function is modified from RAFT load the `KITTI datasets - `_. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2) - and flow valid mask with the shape (H, W). - """ # nopa - - content = np.frombuffer(content, np.uint8) - flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR) - flow = flow[:, :, ::-1].astype(np.float32) - # flow shape (H, W, 2) valid shape (H, W) - flow, valid = flow[:, :, :2], flow[:, :, 2] - flow = (flow - 2**15) / 64.0 - return flow, valid diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/cityscapes.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/cityscapes.py deleted file mode 100644 index 71eead87e7f4e511c0cb59e69c3a599832ada0e4..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/cityscapes.py +++ /dev/null @@ -1,334 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa -# and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - -import glob -import os -import os.path as osp -import tempfile -from collections import OrderedDict - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -from mmcv.utils import print_log - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class CityscapesDataset(CocoDataset): - - CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = img_info['id'] - ann_ids = self.coco.getAnnIds(imgIds=[img_id]) - ann_info = self.coco.loadAnns(ann_ids) - all_iscrowd = all([_['iscrowd'] for _ in ann_info]) - if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat - or all_iscrowd): - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - img_info (dict): Image info of an image. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, \ - bboxes_ignore, labels, masks, seg_map. \ - "masks" are already decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=img_info['segm_file']) - - return ann - - def results2txt(self, results, outfile_prefix): - """Dump the detection results to a txt file. - - Args: - results (list[list | tuple]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. - If the prefix is "somepath/xxx", - the txt files will be named "somepath/xxx.txt". - - Returns: - list[str]: Result txt files which contains corresponding \ - instance segmentation images. - """ - try: - import cityscapesscripts.helpers.labels as CSLabels - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - result_files = [] - os.makedirs(outfile_prefix, exist_ok=True) - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - filename = self.data_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - pred_txt = osp.join(outfile_prefix, basename + '_pred.txt') - - bbox_result, segm_result = result - bboxes = np.vstack(bbox_result) - # segm results - if isinstance(segm_result, tuple): - # Some detectors use different scores for bbox and mask, - # like Mask Scoring R-CNN. Score of segm will be used instead - # of bbox score. - segms = mmcv.concat_list(segm_result[0]) - mask_score = segm_result[1] - else: - # use bbox score for mask score - segms = mmcv.concat_list(segm_result) - mask_score = [bbox[-1] for bbox in bboxes] - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - assert len(bboxes) == len(segms) == len(labels) - num_instances = len(bboxes) - prog_bar.update() - with open(pred_txt, 'w') as fout: - for i in range(num_instances): - pred_class = labels[i] - classes = self.CLASSES[pred_class] - class_id = CSLabels.name2label[classes].id - score = mask_score[i] - mask = maskUtils.decode(segms[i]).astype(np.uint8) - png_filename = osp.join(outfile_prefix, - basename + f'_{i}_{classes}.png') - mmcv.imwrite(mask, png_filename) - fout.write(f'{osp.basename(png_filename)} {class_id} ' - f'{score}\n') - result_files.append(pred_txt) - - return result_files - - def format_results(self, results, txtfile_prefix=None): - """Format the results to txt (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of txt files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving txt/png files when txtfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2txt(results, txtfile_prefix) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - outfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in Cityscapes/COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - outfile_prefix (str | None): The prefix of output file. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with COCO protocol, it would be the - prefix of output json file. For example, the metric is 'bbox' - and 'segm', then json files would be "a/b/prefix.bbox.json" and - "a/b/prefix.segm.json". - If results are evaluated with cityscapes protocol, it would be - the prefix of output txt/png files. The output files would be - png images under folder "a/b/prefix/xxx/" and the file name of - images would be written into a txt file - "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of - cityscapes. If not specified, a temp file will be created. - Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: COCO style evaluation metric or cityscapes mAP \ - and AP@50. - """ - eval_results = dict() - - metrics = metric.copy() if isinstance(metric, list) else [metric] - - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, outfile_prefix, logger)) - metrics.remove('cityscapes') - - # left metrics are all coco metric - if len(metrics) > 0: - # create CocoDataset with CityscapesDataset annotation - self_coco = CocoDataset(self.ann_file, self.pipeline.transforms, - None, self.data_root, self.img_prefix, - self.seg_prefix, self.proposal_file, - self.test_mode, self.filter_empty_gt) - # TODO: remove this in the future - # reload annotations of correct class - self_coco.CLASSES = self.CLASSES - self_coco.data_infos = self_coco.load_annotations(self.ann_file) - eval_results.update( - self_coco.evaluate(results, metrics, logger, outfile_prefix, - classwise, proposal_nums, iou_thrs)) - - return eval_results - - def _evaluate_cityscapes(self, results, txtfile_prefix, logger): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of output txt file - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: Cityscapes evaluation results, contains 'mAP' \ - and 'AP@50'. - """ - - try: - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, txtfile_prefix) - - if tmp_dir is None: - result_dir = osp.join(txtfile_prefix, 'results') - else: - result_dir = osp.join(tmp_dir.name, 'results') - - eval_results = OrderedDict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - # set global states in cityscapes evaluation API - CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..') - CSEval.args.predictionPath = os.path.abspath(result_dir) - CSEval.args.predictionWalk = None - CSEval.args.JSONOutput = False - CSEval.args.colorized = False - CSEval.args.gtInstancesFile = os.path.join(result_dir, - 'gtInstances.json') - CSEval.args.groundTruthSearch = os.path.join( - self.img_prefix.replace('leftImg8bit', 'gtFine'), - '*/*_gtFine_instanceIds.png') - - groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch) - assert len(groundTruthImgList), 'Cannot find ground truth images' \ - f' in {CSEval.args.groundTruthSearch}.' - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(CSEval.getPrediction(gt, CSEval.args)) - CSEval_results = CSEval.evaluateImgLists(predictionImgList, - groundTruthImgList, - CSEval.args)['averages'] - - eval_results['mAP'] = CSEval_results['allAp'] - eval_results['AP@50'] = CSEval_results['allAp50%'] - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/spaces/ahdsoft/Persian-Automatic-Speech-Recognition/app.py b/spaces/ahdsoft/Persian-Automatic-Speech-Recognition/app.py deleted file mode 100644 index 117a3f59eabcc33feec50128696e5ca8ef00df96..0000000000000000000000000000000000000000 --- a/spaces/ahdsoft/Persian-Automatic-Speech-Recognition/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import streamlit as st -from PIL import Image -import os -import requests - - -ASR_API = os.environ['ASR_API'] - -def request_to_asr_service(audiofile): - - # file_path = "/media/mohammadkrb/hddExt/personal_projects/vidabia/audio_tests/epit_sample.mp3" - # file_data = open(file_path, 'rb') - try: - - files = {'file': (audiofile)} - - response = requests.post(ASR_API, files=files) - return response.json() - except: - st.info('ASR Service not worked!') - -st.set_page_config( - page_title="Automatic Speech Recognition", - page_icon="🗣", - layout="centered", - initial_sidebar_state="auto", -) - -upload_path = "uploads/" -download_path = "downloads/" -os.makedirs(upload_path, exist_ok=True) -os.makedirs(download_path, exist_ok=True) -# @st.cache(persist=True,allow_output_mutation=True,show_spinner=False,suppress_st_warning=True) -# def asr_inference_wav2vec2(uploaded_file): -# asr = Wave2Vec2Inference("facebook/wav2vec2-base-960h") -# text = asr.file_to_text(uploaded_file) -# return text - -@st.cache(persist=True,allow_output_mutation=True,show_spinner=False,suppress_st_warning=True) -def save_text(text, downloaded_txt_file): - with open(downloaded_txt_file, 'w') as outtxt: - outtxt.write(text) - print(downloaded_txt_file) - -@st.cache(persist=True,allow_output_mutation=True,show_spinner=False,suppress_st_warning=True) -def download_success(): - st.balloons() - st.success('✅ Download Successful !!') - -main_image = Image.open('static/main_banner.png') - -st.image(main_image,use_column_width='auto') -st.title("🗣 Automatic Speech Recognition") -st.info('✨ Supports ALL Audio Formats (mp3, wav, ogg, ...).') - -uploaded_file = st.file_uploader("Upload audio file", type=["wav"]) -if uploaded_file is not None: - with open(os.path.join(upload_path,uploaded_file.name),"wb") as f: - f.write((uploaded_file).getbuffer()) - with st.spinner(f"Converting speech to text... 💫"): - resp = request_to_asr_service(uploaded_file) - text = resp['transcript'] - # text = asr_inference_wav2vec2(upload_path + uploaded_file.name) - st.info(text) - downloaded_txt_file = os.path.abspath(os.path.join(download_path,str("processed_"+uploaded_file.name.split(".")[0] + ".txt"))) - save_text(text, downloaded_txt_file) - with open(downloaded_txt_file, "rb") as file: - if st.download_button( - label="Download ASR Output 🗣", - data=file, - file_name=str("ASR_output_"+uploaded_file.name.split(".")[0]+ ".txt"), - mime='text/plain' - ): - download_success() -# else: - # st.warning("Please upload your file. Any other audio format is currently not supported") - -st.markdown("

              Made with ❤️ by AHD Co

              ", unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/airely/bingai1/Dockerfile b/spaces/airely/bingai1/Dockerfile deleted file mode 100644 index 54661634f1e20c73a342e4be056ea0a08573c9c7..0000000000000000000000000000000000000000 --- a/spaces/airely/bingai1/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM node:18 -RUN git clone https://github.com/weaigc/bingo.git -WORKDIR "bingo" -RUN npm i -RUN npm run build -EXPOSE 3000 -CMD ["npm", "run", "start"] \ No newline at end of file diff --git a/spaces/akhaliq/JoJoGAN/e4e/configs/paths_config.py b/spaces/akhaliq/JoJoGAN/e4e/configs/paths_config.py deleted file mode 100644 index 4604f6063b8125364a52a492de52fcc54004f373..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/e4e/configs/paths_config.py +++ /dev/null @@ -1,28 +0,0 @@ -dataset_paths = { - # Face Datasets (In the paper: FFHQ - train, CelebAHQ - test) - 'ffhq': '', - 'celeba_test': '', - - # Cars Dataset (In the paper: Stanford cars) - 'cars_train': '', - 'cars_test': '', - - # Horse Dataset (In the paper: LSUN Horse) - 'horse_train': '', - 'horse_test': '', - - # Church Dataset (In the paper: LSUN Church) - 'church_train': '', - 'church_test': '', - - # Cats Dataset (In the paper: LSUN Cat) - 'cats_train': '', - 'cats_test': '' -} - -model_paths = { - 'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt', - 'ir_se50': 'pretrained_models/model_ir_se50.pth', - 'shape_predictor': 'pretrained_models/shape_predictor_68_face_landmarks.dat', - 'moco': 'pretrained_models/moco_v2_800ep_pretrain.pth' -} diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Parser.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Parser.pod deleted file mode 100644 index b8cd46ec91963eec25511df32d5e9d1f8aa1b5cb..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Parser.pod +++ /dev/null @@ -1,67 +0,0 @@ -=head1 NAME - -XML::DOM::Parser - An XML::Parser that builds XML::DOM document structures - -=head1 SYNOPSIS - - use XML::DOM; - - my $parser = new XML::DOM::Parser; - my $doc = $parser->parsefile ("file.xml"); - $doc->dispose; # Avoid memory leaks - cleanup circular references - -=head1 DESCRIPTION - -XML::DOM::Parser extends L - -The XML::Parser module was written by Clark Cooper and -is built on top of XML::Parser::Expat, -which is a lower level interface to James Clark's expat library. - -XML::DOM::Parser parses XML strings or files -and builds a data structure that conforms to the API of the Document Object -Model as described at L. -See the L manpage for other additional properties of the -XML::DOM::Parser class. -Note that the 'Style' property should not be used (it is set internally.) - -The XML::Parser B option is more or less supported, in that it will -generate EntityReference objects whenever an entity reference is encountered -in character data. I'm not sure how useful this is. Any comments are welcome. - -As described in the synopsis, when you create an XML::DOM::Parser object, -the parse and parsefile methods create an L object -from the specified input. This Document object can then be examined, modified and -written back out to a file or converted to a string. - -When using XML::DOM with XML::Parser version 2.19 and up, setting the -XML::DOM::Parser option B to 1 will store CDATASections in -CDATASection nodes, instead of converting them to Text nodes. -Subsequent CDATASection nodes will be merged into one. Let me know if this -is a problem. - -=head1 Using LWP to parse URLs - -The parsefile() method now also supports URLs, e.g. I. -It uses LWP to download the file and then calls parse() on the resulting string. -By default it will use a L that is created as follows: - - use LWP::UserAgent; - $LWP_USER_AGENT = LWP::UserAgent->new; - $LWP_USER_AGENT->env_proxy; - -Note that env_proxy reads proxy settings from environment variables, which is what I need to -do to get thru our firewall. If you want to use a different LWP::UserAgent, you can either set -it globally with: - - XML::DOM::Parser::set_LWP_UserAgent ($my_agent); - -or, you can specify it for a specific XML::DOM::Parser by passing it to the constructor: - - my $parser = new XML::DOM::Parser (LWP_UserAgent => $my_agent); - -Currently, LWP is used when the filename (passed to parsefile) starts with one of -the following URL schemes: http, https, ftp, wais, gopher, or file (followed by a colon.) -If I missed one, please let me know. - -The LWP modules are part of libwww-perl which is available at CPAN. diff --git a/spaces/aksj/Sea_Shanty/app.py b/spaces/aksj/Sea_Shanty/app.py deleted file mode 100644 index 5b7b4746f057b384e92c448105d5ffa83851da90..0000000000000000000000000000000000000000 --- a/spaces/aksj/Sea_Shanty/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import os - -import numpy as np -import gradio as gr -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import cosine_similarity - -shanty=os.environ.get('SHANTY') -def compute_cosine_similarity(text1, text2): - # Initialize the TfidfVectorizer - tfidf_vectorizer = TfidfVectorizer() - - # Fit and transform the texts - tfidf_matrix = tfidf_vectorizer.fit_transform([text1, text2]) - - # Compute the cosine similarity - similarity_score = cosine_similarity(tfidf_matrix[0:1], tfidf_matrix[1:2]) - - return similarity_score[0][0] - -def text_similarity(text): - score= compute_cosine_similarity(shanty,text) - return score*100 - -with gr.Blocks() as demo: - gr.Markdown("# Guess the lyrics of the sea shanty! \n ## Each two seconds of video represents a line") - video=gr.PlayableVideo("final_video.mp4") - inp=gr.Textbox(placeholder="Enter lyrics of sea shanty!",label="Prediction") - - out=gr.Textbox(label="Your points") - inp.change(text_similarity,inp,out) -demo.launch(show_api=False) \ No newline at end of file diff --git a/spaces/ali-ghamdan/deoldify/setup.py b/spaces/ali-ghamdan/deoldify/setup.py deleted file mode 100644 index 2e008ded9f468399c645ca45c4ada90acb6d3d54..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/setup.py +++ /dev/null @@ -1,40 +0,0 @@ -from setuptools import setup, find_packages - - -def get_description(): - return "Deep Learning library for colorizing and restoring old images and video" - - -# def get_long_description(): -# with open("README.md") as f: -# return f.read() - - -def get_requirements(): - with open("requirements.txt") as f: - return f.read().splitlines() - - -setup( - name="DeOldify", - version="0.0.1", - packages=find_packages(exclude=["tests"]), - url="https://github.com/jantic/DeOldify", - license="MIT License", - description=get_description(), - # long_description=get_long_description(), - # long_description_content_type="text/markdown", - classifiers=[ - "Development Status :: 4 - Beta", - "Framework :: Jupyter", - "Intended Audience :: Developers", - "Intended Audience :: Science/Research", - "License :: OSI Approved :: MIT License", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - "Topic :: Software Development :: Libraries :: Python Modules", - ], - install_requires=get_requirements(), - python_requires=">=3.6", -) diff --git a/spaces/aliabd/SummerTime/model/query_based/__init__.py b/spaces/aliabd/SummerTime/model/query_based/__init__.py deleted file mode 100644 index 64940297f17e93a966bf7efba25308682eec0cd4..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/query_based/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .bm25_model import BM25SummModel -from .tf_idf_model import TFIDFSummModel diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/user_model_code/train.sh b/spaces/alistairmcleay/cambridge-masters-project/scripts/user_model_code/train.sh deleted file mode 100644 index 922b069e358fe579fd43f315fe3a8525edc17293..0000000000000000000000000000000000000000 --- a/spaces/alistairmcleay/cambridge-masters-project/scripts/user_model_code/train.sh +++ /dev/null @@ -1,51 +0,0 @@ -experiment=$1 - -# common setup -wandb_train_run_name="Full-user-model-training" -bs=16 # batch size for training -grad_step=2 # accumulated gradient steps -max_epoch=8 # max epoch for training -data_dir="./data/preprocessed/user_model" -train_size=-1 # number of examples used for training, -1 means all -eval_size=-1 # number of examples ued for evaluation, -1 means all - - - -if [[ "$experiment" == "SGD" ]]; then - echo "Conduct experiment with SGD dataset" - job_name='SGD-full' - data_list="sgd" # 165k training examples - eval_interval=50000 # evaluation interval - -elif [[ "$experiment" == "MultiWOZ" ]]; then - echo "Conduct experiment with MulwiWOZ dataset" - job_name='MultiWOZ-full' - data_list="multiwoz" # 56k training examples - eval_interval=20000 - -elif [[ "$experiment" == "Joint" ]]; then - echo "Conduct experiment with SGD + MulwiWOZ dataset" - job_name='Joint-full' - data_list="sgd multiwoz" # 221k training examples - eval_interval=70000 - -else - echo "Unrecognised argument" - exit -fi - -mkdir -p checkpoint log -checkpoint='checkpoint/'$job_name -log='log/'$job_name'.log' -python ./scripts/user_model_code/main_user_model.py --mode='training' \ - --wandb_train_run_name=$wandb_train_run_name \ - --model_name=$job_name \ - --checkpoint=$checkpoint \ - --data_dir=$data_dir \ - --data_list $data_list \ - --train_size=$train_size \ - --eval_size=$eval_size \ - --eval_interval=$eval_interval \ - --gradient_accumulation_steps=$grad_step \ - --train_batch_size=$bs \ - --max_epoch=$max_epoch diff --git a/spaces/amanatid/Adi_The_ArxivGPT_with_Voice/CustomEngine.py b/spaces/amanatid/Adi_The_ArxivGPT_with_Voice/CustomEngine.py deleted file mode 100644 index 0271d1dc8c43817da3dd13c5fe755a97f078314e..0000000000000000000000000000000000000000 --- a/spaces/amanatid/Adi_The_ArxivGPT_with_Voice/CustomEngine.py +++ /dev/null @@ -1,99 +0,0 @@ -from __future__ import annotations -from typing import List, Optional, Tuple -from llama_index import download_loader,VectorStoreIndex,TreeIndex,KeywordTableIndex, SimpleDirectoryReader,GPTListIndex -from langchain.agents import initialize_agent, Tool -from langchain.llms import OpenAI -from langchain.chains.conversation.memory import ConversationBufferMemory -from langchain.embeddings import OpenAIEmbeddings - -#https://betterprogramming.pub/llamaindex-how-to-use-index-correctly-6f928b8944c6 - -class DefaultQueryEngine(): - def __init__(self, index): - self._index = index - self._query_engine = index.as_query_engine() - - @property - def index(self): - """Returns the index engine.""" - return self._index - - @property - def query_engine(self): - """Returns the query engine.""" - return self._query_engine - - def response(self, user_input: str): - """Returns the list response index query engine.""" - return self._query_engine.query(user_input) - - -class ListIndexEngine(): - #https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/retriever/retriever_modes.html - def __init__(self, documents: List[Document] ,params:Optional[boolean]=False, retriever_mode: Optional[retriever_mode]= "embedding", verbose: Optional[boolean]=True): - self._index = GPTListIndex.from_documents(documents) - if params: - self._query_engine =GPTListIndex.from_documents(documents).as_query_engine(retriever_mode=retriever_mode, verbose=verbose) - else: - self._query_engine = GPTListIndex.from_documents(documents).as_query_engine() - - @property - def index(self): - """Returns the index engine.""" - return self._index - - @property - def query_engine(self): - """Returns the list index query engine.""" - return self._query_engine - - def response(self, user_input: str): - """Returns the list response index query engine.""" - return self._query_engine.query(user_input) - -class TreeIndexEngine(): - #https://gpt-index.readthedocs.io/en/v0.6.31/how_to/query_engine/response_modes.html - #https: // gpt - index.readthedocs.io / en / latest / core_modules / query_modules / retriever / retriever_modes.html - - def __init__(self, documents: List[Document],build_tree:Optional[boolean]=True, child_branch_factor:Optional[int]=1, - retriever_mode: Optional[retriever_mode]= "all_leaf", - response_mode: Optional[ response_mode]='tree_summarize'): #summarizing a collection of documents - - self._index = TreeIndex.from_documents(documents,build_tree= build_tree) - if build_tree == False: - self._query_engine = TreeIndex.from_documents(documents,build_tree= build_tree).\ - as_query_engine( retriever_mode=retriever_mode, response_mode = response_mode) - else: - self._query_engine = TreeIndex.from_documents(documents).as_query_engine() - @property - def index(self): - """Returns the list query engine.""" - return self._index - - @property - def query_engine(self): - return self._query_engine - - - def response(self, user_input:str): - return self._query_engine.query(user_input) - - - - - -class VectorIndexEngine(): - def __init__(self, documents: List[Document]): - self._index = VectorStoreIndex.from_documents(documents) - - @property - def index(self): - """Returns the list query engine.""" - return self._index - - def query_engine(self): - return self._index.as_query_engine() - - def response(self, user_input:str): - return self._index.as_query_engine().query(user_input) - diff --git a/spaces/amanmibra/void-demo-aisf/gradio_app.py b/spaces/amanmibra/void-demo-aisf/gradio_app.py deleted file mode 100644 index 57a1a0ce26b0a403423137ca6a56293b0f96e6df..0000000000000000000000000000000000000000 --- a/spaces/amanmibra/void-demo-aisf/gradio_app.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -import gradio as gr - -from cnn import CNNetwork -from server.preprocess import process_raw_wav, _wav_to_spec - -model = CNNetwork() -state_dict = torch.load('models/void_20230522_223553.pth') -model.load_state_dict(state_dict) - -LABELS = ["shafqat", "aman", "jake"] - - -def greet(input): - sr, wav = input - - wav = torch.tensor([wav]).float() - wav = process_raw_wav(wav, sr, 48000, 3) - wav = _wav_to_spec(wav, 48000) - - model_input = wav.unsqueeze(0) - output = model(model_input) - print(output) - - prediction_index = torch.argmax(output, 1).item() - return LABELS[prediction_index] - -demo = gr.Interface(fn=greet, inputs="mic", outputs="text") - -demo.launch() \ No newline at end of file diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py b/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py deleted file mode 100644 index b8cff29b9f4ca56e3a9f4b1ac8e150abb1a0ff30..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py +++ /dev/null @@ -1,69 +0,0 @@ -import os -import sys -import traceback - -from basicsr.utils.download_util import load_file_from_url - -from modules.upscaler import Upscaler, UpscalerData -from ldsr_model_arch import LDSR -from modules import shared, script_callbacks -import sd_hijack_autoencoder, sd_hijack_ddpm_v1 - - -class UpscalerLDSR(Upscaler): - def __init__(self, user_path): - self.name = "LDSR" - self.user_path = user_path - self.model_url = "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" - self.yaml_url = "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1" - super().__init__() - scaler_data = UpscalerData("LDSR", None, self) - self.scalers = [scaler_data] - - def load_model(self, path: str): - # Remove incorrect project.yaml file if too big - yaml_path = os.path.join(self.model_path, "project.yaml") - old_model_path = os.path.join(self.model_path, "model.pth") - new_model_path = os.path.join(self.model_path, "model.ckpt") - safetensors_model_path = os.path.join(self.model_path, "model.safetensors") - if os.path.exists(yaml_path): - statinfo = os.stat(yaml_path) - if statinfo.st_size >= 10485760: - print("Removing invalid LDSR YAML file.") - os.remove(yaml_path) - if os.path.exists(old_model_path): - print("Renaming model from model.pth to model.ckpt") - os.rename(old_model_path, new_model_path) - if os.path.exists(safetensors_model_path): - model = safetensors_model_path - else: - model = load_file_from_url(url=self.model_url, model_dir=self.model_path, - file_name="model.ckpt", progress=True) - yaml = load_file_from_url(url=self.yaml_url, model_dir=self.model_path, - file_name="project.yaml", progress=True) - - try: - return LDSR(model, yaml) - - except Exception: - print("Error importing LDSR:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - return None - - def do_upscale(self, img, path): - ldsr = self.load_model(path) - if ldsr is None: - print("NO LDSR!") - return img - ddim_steps = shared.opts.ldsr_steps - return ldsr.super_resolution(img, ddim_steps, self.scale) - - -def on_ui_settings(): - import gradio as gr - - shared.opts.add_option("ldsr_steps", shared.OptionInfo(100, "LDSR processing steps. Lower = faster", gr.Slider, {"minimum": 1, "maximum": 200, "step": 1}, section=('upscaling', "Upscaling"))) - shared.opts.add_option("ldsr_cached", shared.OptionInfo(False, "Cache LDSR model in memory", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling"))) - - -script_callbacks.on_ui_settings(on_ui_settings) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/synthesize.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/synthesize.py deleted file mode 100644 index ddfe35d29d1271cc7eb8bc199bc09386c783401a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/synthesize.py +++ /dev/null @@ -1,541 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -import argparse -import contextlib -import sys -from argparse import RawTextHelpFormatter - -# pylint: disable=redefined-outer-name, unused-argument -from pathlib import Path - -description = """ -Synthesize speech on command line. - -You can either use your trained model or choose a model from the provided list. - -If you don't specify any models, then it uses LJSpeech based English model. - -#### Single Speaker Models - -- List provided models: - - ``` - $ tts --list_models - ``` - -- Get model info (for both tts_models and vocoder_models): - - - Query by type/name: - The model_info_by_name uses the name as it from the --list_models. - ``` - $ tts --model_info_by_name "///" - ``` - For example: - ``` - $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts - $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2 - ``` - - Query by type/idx: - The model_query_idx uses the corresponding idx from --list_models. - - ``` - $ tts --model_info_by_idx "/" - ``` - - For example: - - ``` - $ tts --model_info_by_idx tts_models/3 - ``` - - - Query info for model info by full name: - ``` - $ tts --model_info_by_name "///" - ``` - -- Run TTS with default models: - - ``` - $ tts --text "Text for TTS" --out_path output/path/speech.wav - ``` - -- Run TTS and pipe out the generated TTS wav file data: - - ``` - $ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay - ``` - -- Run TTS and define speed factor to use for 🐸Coqui Studio models, between 0.0 and 2.0: - - ``` - $ tts --text "Text for TTS" --model_name "coqui_studio///" --speed 1.2 --out_path output/path/speech.wav - ``` - -- Run a TTS model with its default vocoder model: - - ``` - $ tts --text "Text for TTS" --model_name "///" --out_path output/path/speech.wav - ``` - - For example: - - ``` - $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav - ``` - -- Run with specific TTS and vocoder models from the list: - - ``` - $ tts --text "Text for TTS" --model_name "///" --vocoder_name "///" --out_path output/path/speech.wav - ``` - - For example: - - ``` - $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav - ``` - -- Run your own TTS model (Using Griffin-Lim Vocoder): - - ``` - $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav - ``` - -- Run your own TTS and Vocoder models: - - ``` - $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav - --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json - ``` - -#### Multi-speaker Models - -- List the available speakers and choose a among them: - - ``` - $ tts --model_name "//" --list_speaker_idxs - ``` - -- Run the multi-speaker TTS model with the target speaker ID: - - ``` - $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "//" --speaker_idx - ``` - -- Run your own multi-speaker TTS model: - - ``` - $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx - ``` - -### Voice Conversion Models - -``` -$ tts --out_path output/path/speech.wav --model_name "//" --source_wav --target_wav -``` -""" - - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - if v.lower() in ("no", "false", "f", "n", "0"): - return False - raise argparse.ArgumentTypeError("Boolean value expected.") - - -def main(): - parser = argparse.ArgumentParser( - description=description.replace(" ```\n", ""), - formatter_class=RawTextHelpFormatter, - ) - - parser.add_argument( - "--list_models", - type=str2bool, - nargs="?", - const=True, - default=False, - help="list available pre-trained TTS and vocoder models.", - ) - - parser.add_argument( - "--model_info_by_idx", - type=str, - default=None, - help="model info using query format: /", - ) - - parser.add_argument( - "--model_info_by_name", - type=str, - default=None, - help="model info using query format: ///", - ) - - parser.add_argument("--text", type=str, default=None, help="Text to generate speech.") - - # Args for running pre-trained TTS models. - parser.add_argument( - "--model_name", - type=str, - default="tts_models/en/ljspeech/tacotron2-DDC", - help="Name of one of the pre-trained TTS models in format //", - ) - parser.add_argument( - "--vocoder_name", - type=str, - default=None, - help="Name of one of the pre-trained vocoder models in format //", - ) - - # Args for running custom models - parser.add_argument("--config_path", default=None, type=str, help="Path to model config file.") - parser.add_argument( - "--model_path", - type=str, - default=None, - help="Path to model file.", - ) - parser.add_argument( - "--out_path", - type=str, - default="tts_output.wav", - help="Output wav file path.", - ) - parser.add_argument("--use_cuda", type=bool, help="Run model on CUDA.", default=False) - parser.add_argument("--device", type=str, help="Device to run model on.", default="cpu") - parser.add_argument( - "--vocoder_path", - type=str, - help="Path to vocoder model file. If it is not defined, model uses GL as vocoder. Please make sure that you installed vocoder library before (WaveRNN).", - default=None, - ) - parser.add_argument("--vocoder_config_path", type=str, help="Path to vocoder model config file.", default=None) - parser.add_argument( - "--encoder_path", - type=str, - help="Path to speaker encoder model file.", - default=None, - ) - parser.add_argument("--encoder_config_path", type=str, help="Path to speaker encoder config file.", default=None) - - # args for coqui studio - parser.add_argument( - "--cs_model", - type=str, - help="Name of the 🐸Coqui Studio model. Available models are `XTTS`, `V1`.", - ) - parser.add_argument( - "--emotion", - type=str, - help="Emotion to condition the model with. Only available for 🐸Coqui Studio `V1` model.", - default=None, - ) - parser.add_argument( - "--language", - type=str, - help="Language to condition the model with. Only available for 🐸Coqui Studio `XTTS` model.", - default=None, - ) - parser.add_argument( - "--pipe_out", - help="stdout the generated TTS wav file for shell pipe.", - type=str2bool, - nargs="?", - const=True, - default=False, - ) - parser.add_argument( - "--speed", - type=float, - help="Speed factor to use for 🐸Coqui Studio models, between 0.0 and 2.0.", - default=None, - ) - - # args for multi-speaker synthesis - parser.add_argument("--speakers_file_path", type=str, help="JSON file for multi-speaker model.", default=None) - parser.add_argument("--language_ids_file_path", type=str, help="JSON file for multi-lingual model.", default=None) - parser.add_argument( - "--speaker_idx", - type=str, - help="Target speaker ID for a multi-speaker TTS model.", - default=None, - ) - parser.add_argument( - "--language_idx", - type=str, - help="Target language ID for a multi-lingual TTS model.", - default=None, - ) - parser.add_argument( - "--speaker_wav", - nargs="+", - help="wav file(s) to condition a multi-speaker TTS model with a Speaker Encoder. You can give multiple file paths. The d_vectors is computed as their average.", - default=None, - ) - parser.add_argument("--gst_style", help="Wav path file for GST style reference.", default=None) - parser.add_argument( - "--capacitron_style_wav", type=str, help="Wav path file for Capacitron prosody reference.", default=None - ) - parser.add_argument("--capacitron_style_text", type=str, help="Transcription of the reference.", default=None) - parser.add_argument( - "--list_speaker_idxs", - help="List available speaker ids for the defined multi-speaker model.", - type=str2bool, - nargs="?", - const=True, - default=False, - ) - parser.add_argument( - "--list_language_idxs", - help="List available language ids for the defined multi-lingual model.", - type=str2bool, - nargs="?", - const=True, - default=False, - ) - # aux args - parser.add_argument( - "--save_spectogram", - type=bool, - help="If true save raw spectogram for further (vocoder) processing in out_path.", - default=False, - ) - parser.add_argument( - "--reference_wav", - type=str, - help="Reference wav file to convert in the voice of the speaker_idx or speaker_wav", - default=None, - ) - parser.add_argument( - "--reference_speaker_idx", - type=str, - help="speaker ID of the reference_wav speaker (If not provided the embedding will be computed using the Speaker Encoder).", - default=None, - ) - parser.add_argument( - "--progress_bar", - type=str2bool, - help="If true shows a progress bar for the model download. Defaults to True", - default=True, - ) - - # voice conversion args - parser.add_argument( - "--source_wav", - type=str, - default=None, - help="Original audio file to convert in the voice of the target_wav", - ) - parser.add_argument( - "--target_wav", - type=str, - default=None, - help="Target audio file to convert in the voice of the source_wav", - ) - - parser.add_argument( - "--voice_dir", - type=str, - default=None, - help="Voice dir for tortoise model", - ) - - args = parser.parse_args() - - # print the description if either text or list_models is not set - check_args = [ - args.text, - args.list_models, - args.list_speaker_idxs, - args.list_language_idxs, - args.reference_wav, - args.model_info_by_idx, - args.model_info_by_name, - args.source_wav, - args.target_wav, - ] - if not any(check_args): - parser.parse_args(["-h"]) - - pipe_out = sys.stdout if args.pipe_out else None - - with contextlib.redirect_stdout(None if args.pipe_out else sys.stdout): - # Late-import to make things load faster - from TTS.api import TTS - from TTS.utils.manage import ModelManager - from TTS.utils.synthesizer import Synthesizer - - # load model manager - path = Path(__file__).parent / "../.models.json" - manager = ModelManager(path, progress_bar=args.progress_bar) - api = TTS() - - tts_path = None - tts_config_path = None - speakers_file_path = None - language_ids_file_path = None - vocoder_path = None - vocoder_config_path = None - encoder_path = None - encoder_config_path = None - vc_path = None - vc_config_path = None - model_dir = None - - # CASE1 #list : list pre-trained TTS models - if args.list_models: - manager.add_cs_api_models(api.list_models()) - manager.list_models() - sys.exit() - - # CASE2 #info : model info for pre-trained TTS models - if args.model_info_by_idx: - model_query = args.model_info_by_idx - manager.model_info_by_idx(model_query) - sys.exit() - - if args.model_info_by_name: - model_query_full_name = args.model_info_by_name - manager.model_info_by_full_name(model_query_full_name) - sys.exit() - - # CASE3: TTS with coqui studio models - if "coqui_studio" in args.model_name: - print(" > Using 🐸Coqui Studio model: ", args.model_name) - api = TTS(model_name=args.model_name, cs_api_model=args.cs_model) - api.tts_to_file( - text=args.text, - emotion=args.emotion, - file_path=args.out_path, - language=args.language, - speed=args.speed, - pipe_out=pipe_out, - ) - print(" > Saving output to ", args.out_path) - return - - # CASE4: load pre-trained model paths - if args.model_name is not None and not args.model_path: - model_path, config_path, model_item = manager.download_model(args.model_name) - # tts model - if model_item["model_type"] == "tts_models": - tts_path = model_path - tts_config_path = config_path - if "default_vocoder" in model_item: - args.vocoder_name = ( - model_item["default_vocoder"] if args.vocoder_name is None else args.vocoder_name - ) - - # voice conversion model - if model_item["model_type"] == "voice_conversion_models": - vc_path = model_path - vc_config_path = config_path - - # tts model with multiple files to be loaded from the directory path - if model_item.get("author", None) == "fairseq" or isinstance(model_item["model_url"], list): - model_dir = model_path - tts_path = None - tts_config_path = None - args.vocoder_name = None - - # load vocoder - if args.vocoder_name is not None and not args.vocoder_path: - vocoder_path, vocoder_config_path, _ = manager.download_model(args.vocoder_name) - - # CASE5: set custom model paths - if args.model_path is not None: - tts_path = args.model_path - tts_config_path = args.config_path - speakers_file_path = args.speakers_file_path - language_ids_file_path = args.language_ids_file_path - - if args.vocoder_path is not None: - vocoder_path = args.vocoder_path - vocoder_config_path = args.vocoder_config_path - - if args.encoder_path is not None: - encoder_path = args.encoder_path - encoder_config_path = args.encoder_config_path - - device = args.device - if args.use_cuda: - device = "cuda" - - # load models - synthesizer = Synthesizer( - tts_path, - tts_config_path, - speakers_file_path, - language_ids_file_path, - vocoder_path, - vocoder_config_path, - encoder_path, - encoder_config_path, - vc_path, - vc_config_path, - model_dir, - args.voice_dir, - ).to(device) - - # query speaker ids of a multi-speaker model. - if args.list_speaker_idxs: - print( - " > Available speaker ids: (Set --speaker_idx flag to one of these values to use the multi-speaker model." - ) - print(synthesizer.tts_model.speaker_manager.name_to_id) - return - - # query langauge ids of a multi-lingual model. - if args.list_language_idxs: - print( - " > Available language ids: (Set --language_idx flag to one of these values to use the multi-lingual model." - ) - print(synthesizer.tts_model.language_manager.name_to_id) - return - - # check the arguments against a multi-speaker model. - if synthesizer.tts_speakers_file and (not args.speaker_idx and not args.speaker_wav): - print( - " [!] Looks like you use a multi-speaker model. Define `--speaker_idx` to " - "select the target speaker. You can list the available speakers for this model by `--list_speaker_idxs`." - ) - return - - # RUN THE SYNTHESIS - if args.text: - print(" > Text: {}".format(args.text)) - - # kick it - if tts_path is not None: - wav = synthesizer.tts( - args.text, - speaker_name=args.speaker_idx, - language_name=args.language_idx, - speaker_wav=args.speaker_wav, - reference_wav=args.reference_wav, - style_wav=args.capacitron_style_wav, - style_text=args.capacitron_style_text, - reference_speaker_name=args.reference_speaker_idx, - ) - elif vc_path is not None: - wav = synthesizer.voice_conversion( - source_wav=args.source_wav, - target_wav=args.target_wav, - ) - elif model_dir is not None: - wav = synthesizer.tts( - args.text, speaker_name=args.speaker_idx, language_name=args.language_idx, speaker_wav=args.speaker_wav - ) - - # save the results - print(" > Saving output to {}".format(args.out_path)) - synthesizer.save_wav(wav, args.out_path, pipe_out=pipe_out) - - -if __name__ == "__main__": - main() diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_losses.py b/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_losses.py deleted file mode 100644 index 2a35aa2e3717ee7332e1a3926736971c3c97a090..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_losses.py +++ /dev/null @@ -1,92 +0,0 @@ -import os - -import torch - -from tests import get_tests_input_path, get_tests_output_path, get_tests_path -from TTS.config import BaseAudioConfig -from TTS.utils.audio import AudioProcessor -from TTS.vocoder.layers.losses import MelganFeatureLoss, MultiScaleSTFTLoss, STFTLoss, TorchSTFT - -TESTS_PATH = get_tests_path() - -OUT_PATH = os.path.join(get_tests_output_path(), "audio_tests") -os.makedirs(OUT_PATH, exist_ok=True) - -WAV_FILE = os.path.join(get_tests_input_path(), "example_1.wav") - -ap = AudioProcessor(**BaseAudioConfig().to_dict()) - - -def test_torch_stft(): - torch_stft = TorchSTFT(ap.fft_size, ap.hop_length, ap.win_length) - # librosa stft - wav = ap.load_wav(WAV_FILE) - M_librosa = abs(ap._stft(wav)) # pylint: disable=protected-access - # torch stft - wav = torch.from_numpy(wav[None, :]).float() - M_torch = torch_stft(wav) - # check the difference b/w librosa and torch outputs - assert (M_librosa - M_torch[0].data.numpy()).max() < 1e-5 - - -def test_stft_loss(): - stft_loss = STFTLoss(ap.fft_size, ap.hop_length, ap.win_length) - wav = ap.load_wav(WAV_FILE) - wav = torch.from_numpy(wav[None, :]).float() - loss_m, loss_sc = stft_loss(wav, wav) - assert loss_m + loss_sc == 0 - loss_m, loss_sc = stft_loss(wav, torch.rand_like(wav)) - assert loss_sc < 1.0 - assert loss_m + loss_sc > 0 - - -def test_multiscale_stft_loss(): - stft_loss = MultiScaleSTFTLoss( - [ap.fft_size // 2, ap.fft_size, ap.fft_size * 2], - [ap.hop_length // 2, ap.hop_length, ap.hop_length * 2], - [ap.win_length // 2, ap.win_length, ap.win_length * 2], - ) - wav = ap.load_wav(WAV_FILE) - wav = torch.from_numpy(wav[None, :]).float() - loss_m, loss_sc = stft_loss(wav, wav) - assert loss_m + loss_sc == 0 - loss_m, loss_sc = stft_loss(wav, torch.rand_like(wav)) - assert loss_sc < 1.0 - assert loss_m + loss_sc > 0 - - -def test_melgan_feature_loss(): - feats_real = [] - feats_fake = [] - - # if all the features are different. - for _ in range(5): # different scales - scale_feats_real = [] - scale_feats_fake = [] - for _ in range(4): # different layers - scale_feats_real.append(torch.rand([3, 5, 7])) - scale_feats_fake.append(torch.rand([3, 5, 7])) - feats_real.append(scale_feats_real) - feats_fake.append(scale_feats_fake) - - loss_func = MelganFeatureLoss() - loss = loss_func(feats_fake, feats_real) - assert loss.item() <= 1.0 - - feats_real = [] - feats_fake = [] - - # if all the features are the same - for _ in range(5): # different scales - scale_feats_real = [] - scale_feats_fake = [] - for _ in range(4): # different layers - tensor = torch.rand([3, 5, 7]) - scale_feats_real.append(tensor) - scale_feats_fake.append(tensor) - feats_real.append(scale_feats_real) - feats_fake.append(scale_feats_fake) - - loss_func = MelganFeatureLoss() - loss = loss_func(feats_fake, feats_real) - assert loss.item() == 0 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/hf_byte_bpe.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/hf_byte_bpe.py deleted file mode 100644 index c508578d41bf6b7ce0a847e0797d71b19beb393d..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/hf_byte_bpe.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass -from fairseq import file_utils - - -@dataclass -class HuggingFaceByteLevelBPEConfig(FairseqDataclass): - bpe_merges: str = field(default="???", metadata={"help": "path to merges.txt"}) - bpe_vocab: str = field(default="???", metadata={"help": "path to vocab.json"}) - bpe_add_prefix_space: bool = field( - default=False, metadata={"help": "add prefix space before encoding"} - ) - - -@register_bpe("hf_byte_bpe", dataclass=HuggingFaceByteLevelBPEConfig) -class HuggingFaceByteLevelBPE(object): - def __init__(self, cfg): - try: - from tokenizers import ByteLevelBPETokenizer - except ImportError: - raise ImportError( - "Please install huggingface/tokenizers with: " "pip install tokenizers" - ) - - bpe_vocab = file_utils.cached_path(cfg.bpe_vocab) - bpe_merges = file_utils.cached_path(cfg.bpe_merges) - - self.bpe = ByteLevelBPETokenizer( - bpe_vocab, - bpe_merges, - add_prefix_space=cfg.bpe_add_prefix_space, - ) - - def encode(self, x: str) -> str: - return " ".join(map(str, self.bpe.encode(x).ids)) - - def decode(self, x: str) -> str: - return self.bpe.decode( - [int(tok) if tok not in {"", ""} else tok for tok in x.split()] - ) - - def is_beginning_of_word(self, x: str) -> bool: - return self.decode(x).startswith(" ") diff --git a/spaces/ashhadahsan/summarizer-space/utils/openllmapi/__init__.py b/spaces/ashhadahsan/summarizer-space/utils/openllmapi/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ashpepel/ashpepel/Dockerfile b/spaces/ashpepel/ashpepel/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/ashpepel/ashpepel/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/awacke1/Streamlit_Plotly_Graph_Objects/app.py b/spaces/awacke1/Streamlit_Plotly_Graph_Objects/app.py deleted file mode 100644 index 30f030e694d524697a805e667cfe7c4557fde0c4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit_Plotly_Graph_Objects/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import pandas as pd -import plotly.express as px -import streamlit as st -import jsonlines - -st.markdown(""" -| 📝 #Definition | 📋 Data Fields | -| --- | --- | -| 🤝 asking for more help or #treatment | 📄 Patient info, Referral details | -| 💼 about a patient's health #problem or #limits | 📄 Patient info, Health #problem details | -| 💊 allowing medicine | 📄 Patient info, #Medicine #details | -| 🔎 explaining a #patient's health #problem | 📄 Patient info, Health #problem details | -| 🚑 plan for getting better | 📄 Patient info, #Treatment details | -| 🏥 patient needs surgery | 📄 Patient info, #Surgery details | -| 🏃 patient can do activities | 📄 Patient info, #Activity details | -| 📅 reminding about appointments | 📄 Patient info, #Appointment details | -| ♿ patient's disability | 📄 Patient info, #Disability details | -| 🍎 teaching about health | 📄 Patient info, #Education topic | -""") -# Create a DataFrame with CPT codes, procedures, and expected costs -import pandas as pd -import plotly.graph_objects as go -import streamlit as st -import jsonlines -import base64 -from datetime import datetime - -# Create a DataFrame with Code types, values, descriptions, and expected costs -data = { - 'Code Type': ['CPT', 'SNOMED', 'RXNORM', 'DEA', 'LOINC', 'ORI', 'ORU', 'CCD'], - 'Code Value': ['99201', 'A-12345', 'R-12345', 'D-12345', 'L-12345', 'O-12345', 'U-12345', 'C-12345'], - 'Code Description': ['Office/Outpatient Visit', 'Inpatient Consultation', 'Initial Hospital Care', 'Subsequent Hospital Care', 'Critical Care Services', 'Procedure 6', 'Procedure 7', 'Procedure 8'], - 'Expected Cost': [100, 200, 150, 250, 300, 350, 400, 450] -} -df = pd.DataFrame(data) - -# Create a sunburst plot with Plotly -fig = go.Figure(go.Sunburst( - labels=df['Code Type'], - parents=['']*len(df), - values=df['Expected Cost'], - text=df['Code Description'], - hoverinfo="label+value+text", - branchvalues="total", -)) - -fig.update_layout(margin=dict(t=0, l=0, r=0, b=0)) - -# Display the sunburst plot in Streamlit -st.plotly_chart(fig) - -# Save DataFrame to JSONL file -timestamp = datetime.now().strftime("%Y%m%d%H%M%S") -filename = f"output_{timestamp}.jsonl" -with jsonlines.open(filename, mode='w') as writer: - writer.write(df.to_dict(orient='records')) - -# Function to create a download link -def create_download_link(filename): - with open(filename, 'r') as file: - data = file.read() - b64 = base64.b64encode(data.encode()).decode() - return f'Download data as JSONL' - -# Display a link to download the JSONL file -st.markdown(create_download_link(filename), unsafe_allow_html=True) diff --git a/spaces/ayapoooooo123/Balloon_Diffusion/app.py b/spaces/ayapoooooo123/Balloon_Diffusion/app.py deleted file mode 100644 index 18e39130b516fec02c8a3dbf0d224ad6498cba75..0000000000000000000000000000000000000000 --- a/spaces/ayapoooooo123/Balloon_Diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Fictiverse/Stable_Diffusion_BalloonArt_Model").launch() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/src/objects/Skeleton.js b/spaces/banana-projects/web3d/node_modules/three/src/objects/Skeleton.js deleted file mode 100644 index 8764e93cecd9b8330ea225911a21a77b92ba8b3d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/objects/Skeleton.js +++ /dev/null @@ -1,178 +0,0 @@ -import { Matrix4 } from '../math/Matrix4.js'; - -/** - * @author mikael emtinger / http://gomo.se/ - * @author alteredq / http://alteredqualia.com/ - * @author michael guerrero / http://realitymeltdown.com - * @author ikerr / http://verold.com - */ - -function Skeleton( bones, boneInverses ) { - - // copy the bone array - - bones = bones || []; - - this.bones = bones.slice( 0 ); - this.boneMatrices = new Float32Array( this.bones.length * 16 ); - - // use the supplied bone inverses or calculate the inverses - - if ( boneInverses === undefined ) { - - this.calculateInverses(); - - } else { - - if ( this.bones.length === boneInverses.length ) { - - this.boneInverses = boneInverses.slice( 0 ); - - } else { - - console.warn( 'THREE.Skeleton boneInverses is the wrong length.' ); - - this.boneInverses = []; - - for ( var i = 0, il = this.bones.length; i < il; i ++ ) { - - this.boneInverses.push( new Matrix4() ); - - } - - } - - } - -} - -Object.assign( Skeleton.prototype, { - - calculateInverses: function () { - - this.boneInverses = []; - - for ( var i = 0, il = this.bones.length; i < il; i ++ ) { - - var inverse = new Matrix4(); - - if ( this.bones[ i ] ) { - - inverse.getInverse( this.bones[ i ].matrixWorld ); - - } - - this.boneInverses.push( inverse ); - - } - - }, - - pose: function () { - - var bone, i, il; - - // recover the bind-time world matrices - - for ( i = 0, il = this.bones.length; i < il; i ++ ) { - - bone = this.bones[ i ]; - - if ( bone ) { - - bone.matrixWorld.getInverse( this.boneInverses[ i ] ); - - } - - } - - // compute the local matrices, positions, rotations and scales - - for ( i = 0, il = this.bones.length; i < il; i ++ ) { - - bone = this.bones[ i ]; - - if ( bone ) { - - if ( bone.parent && bone.parent.isBone ) { - - bone.matrix.getInverse( bone.parent.matrixWorld ); - bone.matrix.multiply( bone.matrixWorld ); - - } else { - - bone.matrix.copy( bone.matrixWorld ); - - } - - bone.matrix.decompose( bone.position, bone.quaternion, bone.scale ); - - } - - } - - }, - - update: ( function () { - - var offsetMatrix = new Matrix4(); - var identityMatrix = new Matrix4(); - - return function update() { - - var bones = this.bones; - var boneInverses = this.boneInverses; - var boneMatrices = this.boneMatrices; - var boneTexture = this.boneTexture; - - // flatten bone matrices to array - - for ( var i = 0, il = bones.length; i < il; i ++ ) { - - // compute the offset between the current and the original transform - - var matrix = bones[ i ] ? bones[ i ].matrixWorld : identityMatrix; - - offsetMatrix.multiplyMatrices( matrix, boneInverses[ i ] ); - offsetMatrix.toArray( boneMatrices, i * 16 ); - - } - - if ( boneTexture !== undefined ) { - - boneTexture.needsUpdate = true; - - } - - }; - - } )(), - - clone: function () { - - return new Skeleton( this.bones, this.boneInverses ); - - }, - - getBoneByName: function ( name ) { - - for ( var i = 0, il = this.bones.length; i < il; i ++ ) { - - var bone = this.bones[ i ]; - - if ( bone.name === name ) { - - return bone; - - } - - } - - return undefined; - - } - -} ); - - -export { Skeleton }; diff --git a/spaces/barani/ControlNet/app_shuffle.py b/spaces/barani/ControlNet/app_shuffle.py deleted file mode 100644 index f863c95a6c263eeb55fee7c8502d79b404771a3a..0000000000000000000000000000000000000000 --- a/spaces/barani/ControlNet/app_shuffle.py +++ /dev/null @@ -1,98 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio( - label='Preprocessor', - choices=['ContentShuffle', 'None'], - type='value', - value='ContentShuffle') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='content-shuffle', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='shuffle') - demo = create_demo(model.process_shuffle) - demo.queue().launch() diff --git a/spaces/beingpraveen/streamlit_text_to_sql/README.md b/spaces/beingpraveen/streamlit_text_to_sql/README.md deleted file mode 100644 index 5fc07847ba158ca2de74f435475cca9419e3b7a0..0000000000000000000000000000000000000000 --- a/spaces/beingpraveen/streamlit_text_to_sql/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Streamlit_text_to_sql -emoji: 📚 -colorFrom: gray -colorTo: green -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/bguberfain/Detic/CONTRIBUTING.md b/spaces/bguberfain/Detic/CONTRIBUTING.md deleted file mode 100644 index 282a20270bd73506b08ecad5af9cbf01147471c6..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/CONTRIBUTING.md +++ /dev/null @@ -1,39 +0,0 @@ -# Contributing to Detic -We want to make contributing to this project as easy and transparent as -possible. - -## Our Development Process -Minor changes and improvements will be released on an ongoing basis. Larger changes (e.g., changesets implementing a new paper) will be released on a more periodic basis. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## Coding Style -* 4 spaces for indentation rather than tabs -* 80 character line length -* PEP8 formatting following [Black](https://black.readthedocs.io/en/stable/) - -## License -By contributing to Detic, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/upscaler.py b/spaces/bigjoker/stable-diffusion-webui/modules/upscaler.py deleted file mode 100644 index e2eaa7308af0091b6e8f407e889b2e446679e149..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/upscaler.py +++ /dev/null @@ -1,145 +0,0 @@ -import os -from abc import abstractmethod - -import PIL -import numpy as np -import torch -from PIL import Image - -import modules.shared -from modules import modelloader, shared - -LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS) -NEAREST = (Image.Resampling.NEAREST if hasattr(Image, 'Resampling') else Image.NEAREST) - - -class Upscaler: - name = None - model_path = None - model_name = None - model_url = None - enable = True - filter = None - model = None - user_path = None - scalers: [] - tile = True - - def __init__(self, create_dirs=False): - self.mod_pad_h = None - self.tile_size = modules.shared.opts.ESRGAN_tile - self.tile_pad = modules.shared.opts.ESRGAN_tile_overlap - self.device = modules.shared.device - self.img = None - self.output = None - self.scale = 1 - self.half = not modules.shared.cmd_opts.no_half - self.pre_pad = 0 - self.mod_scale = None - - if self.model_path is None and self.name: - self.model_path = os.path.join(shared.models_path, self.name) - if self.model_path and create_dirs: - os.makedirs(self.model_path, exist_ok=True) - - try: - import cv2 - self.can_tile = True - except: - pass - - @abstractmethod - def do_upscale(self, img: PIL.Image, selected_model: str): - return img - - def upscale(self, img: PIL.Image, scale, selected_model: str = None): - self.scale = scale - dest_w = int(img.width * scale) - dest_h = int(img.height * scale) - - for i in range(3): - shape = (img.width, img.height) - - img = self.do_upscale(img, selected_model) - - if shape == (img.width, img.height): - break - - if img.width >= dest_w and img.height >= dest_h: - break - - if img.width != dest_w or img.height != dest_h: - img = img.resize((int(dest_w), int(dest_h)), resample=LANCZOS) - - return img - - @abstractmethod - def load_model(self, path: str): - pass - - def find_models(self, ext_filter=None) -> list: - return modelloader.load_models(model_path=self.model_path, model_url=self.model_url, command_path=self.user_path) - - def update_status(self, prompt): - print(f"\nextras: {prompt}", file=shared.progress_print_out) - - -class UpscalerData: - name = None - data_path = None - scale: int = 4 - scaler: Upscaler = None - model: None - - def __init__(self, name: str, path: str, upscaler: Upscaler = None, scale: int = 4, model=None): - self.name = name - self.data_path = path - self.local_data_path = path - self.scaler = upscaler - self.scale = scale - self.model = model - - -class UpscalerNone(Upscaler): - name = "None" - scalers = [] - - def load_model(self, path): - pass - - def do_upscale(self, img, selected_model=None): - return img - - def __init__(self, dirname=None): - super().__init__(False) - self.scalers = [UpscalerData("None", None, self)] - - -class UpscalerLanczos(Upscaler): - scalers = [] - - def do_upscale(self, img, selected_model=None): - return img.resize((int(img.width * self.scale), int(img.height * self.scale)), resample=LANCZOS) - - def load_model(self, _): - pass - - def __init__(self, dirname=None): - super().__init__(False) - self.name = "Lanczos" - self.scalers = [UpscalerData("Lanczos", None, self)] - - -class UpscalerNearest(Upscaler): - scalers = [] - - def do_upscale(self, img, selected_model=None): - return img.resize((int(img.width * self.scale), int(img.height * self.scale)), resample=NEAREST) - - def load_model(self, _): - pass - - def __init__(self, dirname=None): - super().__init__(False) - self.name = "Nearest" - self.scalers = [UpscalerData("Nearest", None, self)] diff --git a/spaces/bigjoker/stable-diffusion-webui/scripts/img2imgalt.py b/spaces/bigjoker/stable-diffusion-webui/scripts/img2imgalt.py deleted file mode 100644 index 65b61533929a018f0cb97a89266154bf569cd40e..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/scripts/img2imgalt.py +++ /dev/null @@ -1,216 +0,0 @@ -from collections import namedtuple - -import numpy as np -from tqdm import trange - -import modules.scripts as scripts -import gradio as gr - -from modules import processing, shared, sd_samplers, prompt_parser, sd_samplers_common -from modules.processing import Processed -from modules.shared import opts, cmd_opts, state - -import torch -import k_diffusion as K - -from PIL import Image -from torch import autocast -from einops import rearrange, repeat - - -def find_noise_for_image(p, cond, uncond, cfg_scale, steps): - x = p.init_latent - - s_in = x.new_ones([x.shape[0]]) - dnw = K.external.CompVisDenoiser(shared.sd_model) - sigmas = dnw.get_sigmas(steps).flip(0) - - shared.state.sampling_steps = steps - - for i in trange(1, len(sigmas)): - shared.state.sampling_step += 1 - - x_in = torch.cat([x] * 2) - sigma_in = torch.cat([sigmas[i] * s_in] * 2) - cond_in = torch.cat([uncond, cond]) - - image_conditioning = torch.cat([p.image_conditioning] * 2) - cond_in = {"c_concat": [image_conditioning], "c_crossattn": [cond_in]} - - c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)] - t = dnw.sigma_to_t(sigma_in) - - eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in) - denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2) - - denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale - - d = (x - denoised) / sigmas[i] - dt = sigmas[i] - sigmas[i - 1] - - x = x + d * dt - - sd_samplers_common.store_latent(x) - - # This shouldn't be necessary, but solved some VRAM issues - del x_in, sigma_in, cond_in, c_out, c_in, t, - del eps, denoised_uncond, denoised_cond, denoised, d, dt - - shared.state.nextjob() - - return x / x.std() - - -Cached = namedtuple("Cached", ["noise", "cfg_scale", "steps", "latent", "original_prompt", "original_negative_prompt", "sigma_adjustment"]) - - -# Based on changes suggested by briansemrau in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/736 -def find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg_scale, steps): - x = p.init_latent - - s_in = x.new_ones([x.shape[0]]) - dnw = K.external.CompVisDenoiser(shared.sd_model) - sigmas = dnw.get_sigmas(steps).flip(0) - - shared.state.sampling_steps = steps - - for i in trange(1, len(sigmas)): - shared.state.sampling_step += 1 - - x_in = torch.cat([x] * 2) - sigma_in = torch.cat([sigmas[i - 1] * s_in] * 2) - cond_in = torch.cat([uncond, cond]) - - image_conditioning = torch.cat([p.image_conditioning] * 2) - cond_in = {"c_concat": [image_conditioning], "c_crossattn": [cond_in]} - - c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)] - - if i == 1: - t = dnw.sigma_to_t(torch.cat([sigmas[i] * s_in] * 2)) - else: - t = dnw.sigma_to_t(sigma_in) - - eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in) - denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2) - - denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale - - if i == 1: - d = (x - denoised) / (2 * sigmas[i]) - else: - d = (x - denoised) / sigmas[i - 1] - - dt = sigmas[i] - sigmas[i - 1] - x = x + d * dt - - sd_samplers_common.store_latent(x) - - # This shouldn't be necessary, but solved some VRAM issues - del x_in, sigma_in, cond_in, c_out, c_in, t, - del eps, denoised_uncond, denoised_cond, denoised, d, dt - - shared.state.nextjob() - - return x / sigmas[-1] - - -class Script(scripts.Script): - def __init__(self): - self.cache = None - - def title(self): - return "img2img alternative test" - - def show(self, is_img2img): - return is_img2img - - def ui(self, is_img2img): - info = gr.Markdown(''' - * `CFG Scale` should be 2 or lower. - ''') - - override_sampler = gr.Checkbox(label="Override `Sampling method` to Euler?(this method is built for it)", value=True, elem_id=self.elem_id("override_sampler")) - - override_prompt = gr.Checkbox(label="Override `prompt` to the same value as `original prompt`?(and `negative prompt`)", value=True, elem_id=self.elem_id("override_prompt")) - original_prompt = gr.Textbox(label="Original prompt", lines=1, elem_id=self.elem_id("original_prompt")) - original_negative_prompt = gr.Textbox(label="Original negative prompt", lines=1, elem_id=self.elem_id("original_negative_prompt")) - - override_steps = gr.Checkbox(label="Override `Sampling Steps` to the same value as `Decode steps`?", value=True, elem_id=self.elem_id("override_steps")) - st = gr.Slider(label="Decode steps", minimum=1, maximum=150, step=1, value=50, elem_id=self.elem_id("st")) - - override_strength = gr.Checkbox(label="Override `Denoising strength` to 1?", value=True, elem_id=self.elem_id("override_strength")) - - cfg = gr.Slider(label="Decode CFG scale", minimum=0.0, maximum=15.0, step=0.1, value=1.0, elem_id=self.elem_id("cfg")) - randomness = gr.Slider(label="Randomness", minimum=0.0, maximum=1.0, step=0.01, value=0.0, elem_id=self.elem_id("randomness")) - sigma_adjustment = gr.Checkbox(label="Sigma adjustment for finding noise for image", value=False, elem_id=self.elem_id("sigma_adjustment")) - - return [ - info, - override_sampler, - override_prompt, original_prompt, original_negative_prompt, - override_steps, st, - override_strength, - cfg, randomness, sigma_adjustment, - ] - - def run(self, p, _, override_sampler, override_prompt, original_prompt, original_negative_prompt, override_steps, st, override_strength, cfg, randomness, sigma_adjustment): - # Override - if override_sampler: - p.sampler_name = "Euler" - if override_prompt: - p.prompt = original_prompt - p.negative_prompt = original_negative_prompt - if override_steps: - p.steps = st - if override_strength: - p.denoising_strength = 1.0 - - def sample_extra(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts): - lat = (p.init_latent.cpu().numpy() * 10).astype(int) - - same_params = self.cache is not None and self.cache.cfg_scale == cfg and self.cache.steps == st \ - and self.cache.original_prompt == original_prompt \ - and self.cache.original_negative_prompt == original_negative_prompt \ - and self.cache.sigma_adjustment == sigma_adjustment - same_everything = same_params and self.cache.latent.shape == lat.shape and np.abs(self.cache.latent-lat).sum() < 100 - - if same_everything: - rec_noise = self.cache.noise - else: - shared.state.job_count += 1 - cond = p.sd_model.get_learned_conditioning(p.batch_size * [original_prompt]) - uncond = p.sd_model.get_learned_conditioning(p.batch_size * [original_negative_prompt]) - if sigma_adjustment: - rec_noise = find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg, st) - else: - rec_noise = find_noise_for_image(p, cond, uncond, cfg, st) - self.cache = Cached(rec_noise, cfg, st, lat, original_prompt, original_negative_prompt, sigma_adjustment) - - rand_noise = processing.create_random_tensors(p.init_latent.shape[1:], seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, seed_resize_from_h=p.seed_resize_from_h, seed_resize_from_w=p.seed_resize_from_w, p=p) - - combined_noise = ((1 - randomness) * rec_noise + randomness * rand_noise) / ((randomness**2 + (1-randomness)**2) ** 0.5) - - sampler = sd_samplers.create_sampler(p.sampler_name, p.sd_model) - - sigmas = sampler.model_wrap.get_sigmas(p.steps) - - noise_dt = combined_noise - (p.init_latent / sigmas[0]) - - p.seed = p.seed + 1 - - return sampler.sample_img2img(p, p.init_latent, noise_dt, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning) - - p.sample = sample_extra - - p.extra_generation_params["Decode prompt"] = original_prompt - p.extra_generation_params["Decode negative prompt"] = original_negative_prompt - p.extra_generation_params["Decode CFG scale"] = cfg - p.extra_generation_params["Decode steps"] = st - p.extra_generation_params["Randomness"] = randomness - p.extra_generation_params["Sigma Adjustment"] = sigma_adjustment - - processed = processing.process_images(p) - - return processed - diff --git a/spaces/bioriAsaeru/text-to-voice/Captain Red Hot Chili Peppers.md b/spaces/bioriAsaeru/text-to-voice/Captain Red Hot Chili Peppers.md deleted file mode 100644 index 4d6963fc5c185fb026281ce67d626de08ac00b2d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Captain Red Hot Chili Peppers.md +++ /dev/null @@ -1,10 +0,0 @@ -
              -

              [url= -hot-chili-peppers/2019/derwent-entertainment-centre-hobart-australia-4b956312.html][img] -image-v1?id=4b956312[/img][/url][url= =4b956312&step=song]Edit this setlist[/url] | [url= -hot-chili-peppers-13d68969.html]More Red Hot Chili Peppers setlists[/url]

              -

              A study of more than 16,000 people in the United States revealed that individuals who consumed red chili peppers had a lower risk of death from all causes over an average of 18 years than those who did not eat the spicy food.

              -

              captain red hot chili peppers


              DOWNLOAD ✶✶✶ https://urloso.com/2uyQiT



              -

              While the researchers are unable the pinpoint the precise mechanisms by which red chili peppers might extend lifespan, the team says that it is likely down to capsaicin, which activities transient receptor potential (TRP) channels.

              -

              At Beer & Wine Hobby when we find something we absolutely love we like to share it with our family friends and customers. When we stumbled on Captain Mowatt`s buried treasure we knew we needed to share! His unique BBQ hot sauce and spicy mustard is hand crafted in three gallon batches. Each one is bursting with flavor and just the right amount of heat. Since 2002 Captain Mowatt has been hard at work making his fiery sauces. The Captain so generously believes in sharing his booty - every bottle is 8+ oz. unlike other sauces!Enter the `Heart of Tartness` where pucker gives way to perfection. Red chili peppers cranberries and a bracing tang of orange make this hometown favorite perfect on anything and everything. Boston Red Hot Sauce will knock other sauces out of the park!

              -

            Filed Under: Afternoon Links, depressed darth vader, j.k. simmons, terminator: genesis, lorde, Prince, Zooey Deschanel, New Girl, J.K. Rowling, Harry Potter, Drake, kristen bell, frozen, le1f, Red Hot Chili Peppers, captain america 3, batman vs. superman, chadwick boseman, get on up, james brown, tate taylor, hannibal buress, Louis C.K.

            -

            Je bent ongetwijfeld opgegroeid met de muziek van de Red Hot Chili Peppers. Misschien draaiden je ouders de platen grijs toen je opgroeide of je bent zelf op latere leeftijd de muziek gaan waarderen. Een Red Hot Chili Peppers LP kopen is in alle gevallen een uitstekende keuze. Je zorgt hierdoor voor een leuke aanvulling op je bestaande LP collectie en luistert naar tal van bekende en minder bekende nummers. Naar wat voor LP Red Hot Chili Peppers ben je op dit moment op zoek?\n"}};var elementorFrontendConfig = "environmentMode":"edit":false,"wpPreview":false,"isScriptDebug":false,"i18n":"shareOnFacebook":"Deel via Facebook","shareOnTwitter":"Deel via Twitter","pinIt":"Pin dit","download":"Downloaden","downloadImage":"Download afbeelding","fullscreen":"Volledig scherm","zoom":"Zoom","share":"Delen","playVideo":"Video afspelen","previous":"Vorige","next":"Volgende","close":"Sluiten","is_rtl":false,"breakpoints":"xs":0,"sm":480,"md":768,"lg":1025,"xl":1440,"xxl":1600,"responsive":"breakpoints":"mobile":"label":"Mobiel","value":767,"default_value":767,"direction":"max","is_enabled":true,"mobile_extra":"label":"Mobiel Extra","value":880,"default_value":880,"direction":"max","is_enabled":false,"tablet":"label":"Tablet","value":1024,"default_value":1024,"direction":"max","is_enabled":true,"tablet_extra":"label":"Tablet Extra","value":1200,"default_value":1200,"direction":"max","is_enabled":false,"laptop":"label":"Laptop","value":1366,"default_value":1366,"direction":"max","is_enabled":false,"widescreen":"label":"Breedbeeld","value":2400,"default_value":2400,"direction":"min","is_enabled":false,"version":"3.4.8","is_static":false,"experimentalFeatures":"e_dom_optimization":true,"a11y_improvements":true,"e_import_export":true,"additional_custom_breakpoints":true,"landing-pages":true,"elements-color-picker":true,"admin-top-bar":true,"urls":"assets":"https:\/\/www.captain-vinyl.com\/wp-content\/plugins\/elementor\/assets\/","settings":"editorPreferences":[],"kit":"active_breakpoints":["viewport_mobile","viewport_tablet"],"global_image_lightbox":"yes","lightbox_enable_counter":"yes","lightbox_enable_fullscreen":"yes","lightbox_enable_zoom":"yes","lightbox_enable_share":"yes","lightbox_title_src":"title","lightbox_description_src":"description","post":"id":0,"title":"Bekijk alle Red Hot Chili Peppers LPs Online bij - Captain Vinyl","excerpt":"Red Hot Chili Peppers LPs\nJe bent ongetwijfeld opgegroeid met de muziek van de Red Hot Chili Peppers. Misschien draaiden je ouders de platen grijs toen je opgroeide of je bent zelf op latere leeftijd de muziek gaan waarderen. Een Red Hot Chili Peppers LP kopen is in alle gevallen een uitstekende keuze. Je zorgt hierdoor voor een leuke aanvulling op je bestaande LP collectie en luistert naar tal van bekende en minder bekende nummers. Naar wat voor LP Red Hot Chili Peppers ben je op dit moment op zoek?\n";/* *//* *//* */jQuery(function($) // Update value on change.jQuery( '.dropdown_layered_nav_artiest' ).on( 'change', function() var slug = jQuery( this ).val();jQuery( ':input[name="filter_artiest"]' ).val( slug );// Submit form on change if standard dropdown.if ( ! jQuery( this ).attr( 'multiple' ) ) jQuery( this ).closest( 'form' ).trigger( 'submit' ););// Use Select2 enhancement if possibleif ( jQuery().selectWoo ) var wc_layered_nav_select = function() jQuery( '.dropdown_layered_nav_artiest' ).selectWoo( placeholder: decodeURIComponent('Kies%20een%20Artiest'),minimumResultsForSearch: 5,width: '100%',allowClear: true,language: noResults: function() return 'Geen overeenkomsten gevonden'; );;wc_layered_nav_select(); );Sign incloseGebruikersnaam of e-mailadres *

            aaccfb2cb3
            -
            -
            \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Crack para Dinesat Radio 9 la solucin definitiva para tu emisora de radio.md b/spaces/bioriAsaeru/text-to-voice/Crack para Dinesat Radio 9 la solucin definitiva para tu emisora de radio.md deleted file mode 100644 index fc3bdcf0ab1d869a623f487bef910dde4edb058f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Crack para Dinesat Radio 9 la solucin definitiva para tu emisora de radio.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Adobe muse amtlib.dll crack


            Downloadhttps://urloso.com/2uyP9B



            -
            - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/bioriAsaeru/text-to-voice/Drummania V7 Download For Pc.rar Everything You Need to Know About Guitar Freaks Drummania V7.md b/spaces/bioriAsaeru/text-to-voice/Drummania V7 Download For Pc.rar Everything You Need to Know About Guitar Freaks Drummania V7.md deleted file mode 100644 index 35f83b9e8599db25e0ac4f74b1d8783f12dabecc..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Drummania V7 Download For Pc.rar Everything You Need to Know About Guitar Freaks Drummania V7.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Drummania V7 Download For Pc.rar


            Download File ————— https://urloso.com/2uyQlH



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (download Finding Dory (English) !FREE! Full).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (download Finding Dory (English) !FREE! Full).md deleted file mode 100644 index d7a6f0dfea0023221681f198c2efb6f4c660f356..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (download Finding Dory (English) !FREE! Full).md +++ /dev/null @@ -1,6 +0,0 @@ -

            HD Online Player (download Finding Dory (English) full)


            DOWNLOAD ····· https://urloso.com/2uyPqu



            -
            -2011 Hindi Dubbed Watch Full Movie Online HD Download. ... Download Killing Me Softly Full Movies in Hindi Download Hin Eng 480p in 300MB ... and Software directly from Torrent. online finding nemo 2003 free download HD quality 720p ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/bioriAsaeru/text-to-voice/Jogo Piratas Do Caribe Pc Crack NEW!.md b/spaces/bioriAsaeru/text-to-voice/Jogo Piratas Do Caribe Pc Crack NEW!.md deleted file mode 100644 index 92d7548b2360b057978eead633e12e83d3d08a1d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Jogo Piratas Do Caribe Pc Crack NEW!.md +++ /dev/null @@ -1,10 +0,0 @@ -

            Jogo Piratas Do Caribe Pc Crack


            Download File ---> https://urloso.com/2uyQez



            -
            -Last member of this channel and answer or my work: do . We know it was created in response to some people with this request, but it's not a good idea to create channels like this that offer solutions that you can get on another channel with other questions. -In this case, I don't know what to do with this question, so it will just be deleted. -I sincerely appreciate your help, but I don't plan on keeping this channel going anyway. -Sincerely -I think the question was asked for this reason, but you are the 8a78ff9644
            -
            -
            -

            diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py b/spaces/brjathu/HMR2.0/vendor/detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py deleted file mode 100644 index bdd49a4566d1d0c79d0613c34a8cffd616f74fd2..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py +++ /dev/null @@ -1,152 +0,0 @@ -# An example config to train a mmdetection model using detectron2. - -from ..common.data.coco import dataloader -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.optim import SGD as optimizer -from ..common.train import train -from ..common.data.constants import constants - -from detectron2.modeling.mmdet_wrapper import MMDetDetector -from detectron2.config import LazyCall as L - -model = L(MMDetDetector)( - detector=dict( - type="MaskRCNN", - pretrained="torchvision://resnet50", - backbone=dict( - type="ResNet", - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type="BN", requires_grad=True), - norm_eval=True, - style="pytorch", - ), - neck=dict(type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5), - rpn_head=dict( - type="RPNHead", - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type="AnchorGenerator", - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64], - ), - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - roi_head=dict( - type="StandardRoIHead", - bbox_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - bbox_head=dict( - type="Shared2FCBBoxHead", - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[0.1, 0.1, 0.2, 0.2], - ), - reg_class_agnostic=False, - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - mask_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - mask_head=dict( - type="FCNMaskHead", - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict(type="CrossEntropyLoss", use_mask=True, loss_weight=1.0), - ), - ), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False, - ), - allowed_border=-1, - pos_weight=-1, - debug=False, - ), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - ), - mask_size=28, - pos_weight=-1, - debug=False, - ), - ), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - score_thr=0.05, - nms=dict(type="nms", iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5, - ), - ), - ), - pixel_mean=constants.imagenet_rgb256_mean, - pixel_std=constants.imagenet_rgb256_std, -) - -dataloader.train.mapper.image_format = "RGB" # torchvision pretrained model -train.init_checkpoint = None # pretrained model is loaded inside backbone diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/tests/unit/test_lights.py b/spaces/brjathu/HMR2.0/vendor/pyrender/tests/unit/test_lights.py deleted file mode 100644 index ffde856b21e8cce9532f0308fcd1c7eb2d1eba90..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/pyrender/tests/unit/test_lights.py +++ /dev/null @@ -1,104 +0,0 @@ -import numpy as np -import pytest - -from pyrender import (DirectionalLight, SpotLight, PointLight, Texture, - PerspectiveCamera, OrthographicCamera) -from pyrender.constants import SHADOW_TEX_SZ - - -def test_directional_light(): - - d = DirectionalLight() - assert d.name is None - assert np.all(d.color == 1.0) - assert d.intensity == 1.0 - - d.name = 'direc' - with pytest.raises(ValueError): - d.color = None - with pytest.raises(TypeError): - d.intensity = None - - d = DirectionalLight(color=[0.0, 0.0, 0.0]) - assert np.all(d.color == 0.0) - - d._generate_shadow_texture() - st = d.shadow_texture - assert isinstance(st, Texture) - assert st.width == st.height == SHADOW_TEX_SZ - - sc = d._get_shadow_camera(scene_scale=5.0) - assert isinstance(sc, OrthographicCamera) - assert sc.xmag == sc.ymag == 5.0 - assert sc.znear == 0.01 * 5.0 - assert sc.zfar == 10 * 5.0 - - -def test_spot_light(): - - s = SpotLight() - assert s.name is None - assert np.all(s.color == 1.0) - assert s.intensity == 1.0 - assert s.innerConeAngle == 0.0 - assert s.outerConeAngle == np.pi / 4.0 - assert s.range is None - - with pytest.raises(ValueError): - s.range = -1.0 - - with pytest.raises(ValueError): - s.range = 0.0 - - with pytest.raises(ValueError): - s.innerConeAngle = -1.0 - - with pytest.raises(ValueError): - s.innerConeAngle = np.pi / 3.0 - - with pytest.raises(ValueError): - s.outerConeAngle = -1.0 - - with pytest.raises(ValueError): - s.outerConeAngle = np.pi - - s.range = 5.0 - s.outerConeAngle = np.pi / 2 - 0.05 - s.innerConeAngle = np.pi / 3 - s.innerConeAngle = 0.0 - s.outerConeAngle = np.pi / 4.0 - - s._generate_shadow_texture() - st = s.shadow_texture - assert isinstance(st, Texture) - assert st.width == st.height == SHADOW_TEX_SZ - - sc = s._get_shadow_camera(scene_scale=5.0) - assert isinstance(sc, PerspectiveCamera) - assert sc.znear == 0.01 * 5.0 - assert sc.zfar == 10 * 5.0 - assert sc.aspectRatio == 1.0 - assert np.allclose(sc.yfov, np.pi / 16.0 * 9.0) # Plus pi / 16 - - -def test_point_light(): - - s = PointLight() - assert s.name is None - assert np.all(s.color == 1.0) - assert s.intensity == 1.0 - assert s.range is None - - with pytest.raises(ValueError): - s.range = -1.0 - - with pytest.raises(ValueError): - s.range = 0.0 - - s.range = 5.0 - - with pytest.raises(NotImplementedError): - s._generate_shadow_texture() - - with pytest.raises(NotImplementedError): - s._get_shadow_camera(scene_scale=5.0) diff --git a/spaces/bugbugbug/vits-uma-genshin-honkai/mel_processing.py b/spaces/bugbugbug/vits-uma-genshin-honkai/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/bugbugbug/vits-uma-genshin-honkai/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/caffeinum/VToonify/vtoonify/model/encoder/readme.md b/spaces/caffeinum/VToonify/vtoonify/model/encoder/readme.md deleted file mode 100644 index 5421bfe3e67b7b6cbd7baf96b741b539d65bb0fd..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/encoder/readme.md +++ /dev/null @@ -1,9 +0,0 @@ -# Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation - -## Description -Official Implementation of pSp paper for both training and evaluation. The pSp method extends the StyleGAN model to -allow solving different image-to-image translation problems using its encoder. - -Fork from [https://github.com/eladrich/pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel). - -In VToonify, we modify pSp to accept z+ latent code. diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/abc.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/abc.py deleted file mode 100644 index 44a3bda34665a5e3b67fba9acc1e545a37b16617..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/abc.py +++ /dev/null @@ -1,207 +0,0 @@ -import asyncio -import logging -from abc import ABC, abstractmethod -from collections.abc import Sized -from http.cookies import BaseCookie, Morsel -from typing import ( - TYPE_CHECKING, - Any, - Awaitable, - Callable, - Dict, - Generator, - Iterable, - List, - Optional, - Tuple, -) - -from multidict import CIMultiDict -from yarl import URL - -from .helpers import get_running_loop -from .typedefs import LooseCookies - -if TYPE_CHECKING: # pragma: no cover - from .web_app import Application - from .web_exceptions import HTTPException - from .web_request import BaseRequest, Request - from .web_response import StreamResponse -else: - BaseRequest = Request = Application = StreamResponse = None - HTTPException = None - - -class AbstractRouter(ABC): - def __init__(self) -> None: - self._frozen = False - - def post_init(self, app: Application) -> None: - """Post init stage. - - Not an abstract method for sake of backward compatibility, - but if the router wants to be aware of the application - it can override this. - """ - - @property - def frozen(self) -> bool: - return self._frozen - - def freeze(self) -> None: - """Freeze router.""" - self._frozen = True - - @abstractmethod - async def resolve(self, request: Request) -> "AbstractMatchInfo": - """Return MATCH_INFO for given request""" - - -class AbstractMatchInfo(ABC): - @property # pragma: no branch - @abstractmethod - def handler(self) -> Callable[[Request], Awaitable[StreamResponse]]: - """Execute matched request handler""" - - @property - @abstractmethod - def expect_handler(self) -> Callable[[Request], Awaitable[None]]: - """Expect handler for 100-continue processing""" - - @property # pragma: no branch - @abstractmethod - def http_exception(self) -> Optional[HTTPException]: - """HTTPException instance raised on router's resolving, or None""" - - @abstractmethod # pragma: no branch - def get_info(self) -> Dict[str, Any]: - """Return a dict with additional info useful for introspection""" - - @property # pragma: no branch - @abstractmethod - def apps(self) -> Tuple[Application, ...]: - """Stack of nested applications. - - Top level application is left-most element. - - """ - - @abstractmethod - def add_app(self, app: Application) -> None: - """Add application to the nested apps stack.""" - - @abstractmethod - def freeze(self) -> None: - """Freeze the match info. - - The method is called after route resolution. - - After the call .add_app() is forbidden. - - """ - - -class AbstractView(ABC): - """Abstract class based view.""" - - def __init__(self, request: Request) -> None: - self._request = request - - @property - def request(self) -> Request: - """Request instance.""" - return self._request - - @abstractmethod - def __await__(self) -> Generator[Any, None, StreamResponse]: - """Execute the view handler.""" - - -class AbstractResolver(ABC): - """Abstract DNS resolver.""" - - @abstractmethod - async def resolve(self, host: str, port: int, family: int) -> List[Dict[str, Any]]: - """Return IP address for given hostname""" - - @abstractmethod - async def close(self) -> None: - """Release resolver""" - - -if TYPE_CHECKING: # pragma: no cover - IterableBase = Iterable[Morsel[str]] -else: - IterableBase = Iterable - - -ClearCookiePredicate = Callable[["Morsel[str]"], bool] - - -class AbstractCookieJar(Sized, IterableBase): - """Abstract Cookie Jar.""" - - def __init__(self, *, loop: Optional[asyncio.AbstractEventLoop] = None) -> None: - self._loop = get_running_loop(loop) - - @abstractmethod - def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: - """Clear all cookies if no predicate is passed.""" - - @abstractmethod - def clear_domain(self, domain: str) -> None: - """Clear all cookies for domain and all subdomains.""" - - @abstractmethod - def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: - """Update cookies.""" - - @abstractmethod - def filter_cookies(self, request_url: URL) -> "BaseCookie[str]": - """Return the jar's cookies filtered by their attributes.""" - - -class AbstractStreamWriter(ABC): - """Abstract stream writer.""" - - buffer_size = 0 - output_size = 0 - length: Optional[int] = 0 - - @abstractmethod - async def write(self, chunk: bytes) -> None: - """Write chunk into stream.""" - - @abstractmethod - async def write_eof(self, chunk: bytes = b"") -> None: - """Write last chunk.""" - - @abstractmethod - async def drain(self) -> None: - """Flush the write buffer.""" - - @abstractmethod - def enable_compression(self, encoding: str = "deflate") -> None: - """Enable HTTP body compression""" - - @abstractmethod - def enable_chunking(self) -> None: - """Enable HTTP chunked mode""" - - @abstractmethod - async def write_headers( - self, status_line: str, headers: "CIMultiDict[str]" - ) -> None: - """Write HTTP headers""" - - -class AbstractAccessLogger(ABC): - """Abstract writer to access log.""" - - def __init__(self, logger: logging.Logger, log_format: str) -> None: - self.logger = logger - self.log_format = log_format - - @abstractmethod - def log(self, request: BaseRequest, response: StreamResponse, time: float) -> None: - """Emit log to logger.""" diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/panoptic_fpn.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/panoptic_fpn.py deleted file mode 100644 index b31e1c8dc06913d413ae829426e0625fdd5c2f38..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/panoptic_fpn.py +++ /dev/null @@ -1,269 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from typing import Dict, List -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.structures import ImageList - -from ..postprocessing import detector_postprocess, sem_seg_postprocess -from .build import META_ARCH_REGISTRY -from .rcnn import GeneralizedRCNN -from .semantic_seg import build_sem_seg_head - -__all__ = ["PanopticFPN"] - - -@META_ARCH_REGISTRY.register() -class PanopticFPN(GeneralizedRCNN): - """ - Implement the paper :paper:`PanopticFPN`. - """ - - @configurable - def __init__( - self, - *, - sem_seg_head: nn.Module, - combine_overlap_thresh: float = 0.5, - combine_stuff_area_thresh: float = 4096, - combine_instances_score_thresh: float = 0.5, - **kwargs, - ): - """ - NOTE: this interface is experimental. - - Args: - sem_seg_head: a module for the semantic segmentation head. - combine_overlap_thresh: combine masks into one instances if - they have enough overlap - combine_stuff_area_thresh: ignore stuff areas smaller than this threshold - combine_instances_score_thresh: ignore instances whose score is - smaller than this threshold - - Other arguments are the same as :class:`GeneralizedRCNN`. - """ - super().__init__(**kwargs) - self.sem_seg_head = sem_seg_head - # options when combining instance & semantic outputs - self.combine_overlap_thresh = combine_overlap_thresh - self.combine_stuff_area_thresh = combine_stuff_area_thresh - self.combine_instances_score_thresh = combine_instances_score_thresh - - @classmethod - def from_config(cls, cfg): - ret = super().from_config(cfg) - ret.update( - { - "combine_overlap_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH, - "combine_stuff_area_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT, - "combine_instances_score_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH, # noqa - } - ) - ret["sem_seg_head"] = build_sem_seg_head(cfg, ret["backbone"].output_shape()) - logger = logging.getLogger(__name__) - if not cfg.MODEL.PANOPTIC_FPN.COMBINE.ENABLED: - logger.warning( - "PANOPTIC_FPN.COMBINED.ENABLED is no longer used. " - " model.inference(do_postprocess=) should be used to toggle postprocessing." - ) - if cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT != 1.0: - w = cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT - logger.warning( - "PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT should be replaced by weights on each ROI head." - ) - - def update_weight(x): - if isinstance(x, dict): - return {k: v * w for k, v in x.items()} - else: - return x * w - - roi_heads = ret["roi_heads"] - roi_heads.box_predictor.loss_weight = update_weight(roi_heads.box_predictor.loss_weight) - roi_heads.mask_head.loss_weight = update_weight(roi_heads.mask_head.loss_weight) - return ret - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper`. - Each item in the list contains the inputs for one image. - - For now, each item in the list is a dict that contains: - - * "image": Tensor, image in (C, H, W) format. - * "instances": Instances - * "sem_seg": semantic segmentation ground truth. - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model, used in inference. - See :meth:`postprocess` for details. - - Returns: - list[dict]: - each dict has the results for one image. The dict contains the following keys: - - * "instances": see :meth:`GeneralizedRCNN.forward` for its format. - * "sem_seg": see :meth:`SemanticSegmentor.forward` for its format. - * "panoptic_seg": See the return value of - :func:`combine_semantic_and_instance_outputs` for its format. - """ - if not self.training: - return self.inference(batched_inputs) - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - - assert "sem_seg" in batched_inputs[0] - gt_sem_seg = [x["sem_seg"].to(self.device) for x in batched_inputs] - gt_sem_seg = ImageList.from_tensors( - gt_sem_seg, - self.backbone.size_divisibility, - self.sem_seg_head.ignore_value, - self.backbone.padding_constraints, - ).tensor - sem_seg_results, sem_seg_losses = self.sem_seg_head(features, gt_sem_seg) - - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - detector_results, detector_losses = self.roi_heads( - images, features, proposals, gt_instances - ) - - losses = sem_seg_losses - losses.update(proposal_losses) - losses.update(detector_losses) - return losses - - def inference(self, batched_inputs: List[Dict[str, torch.Tensor]], do_postprocess: bool = True): - """ - Run inference on the given inputs. - - Args: - batched_inputs (list[dict]): same as in :meth:`forward` - do_postprocess (bool): whether to apply post-processing on the outputs. - - Returns: - When do_postprocess=True, see docs in :meth:`forward`. - Otherwise, returns a (list[Instances], list[Tensor]) that contains - the raw detector outputs, and raw semantic segmentation outputs. - """ - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - sem_seg_results, sem_seg_losses = self.sem_seg_head(features, None) - proposals, _ = self.proposal_generator(images, features, None) - detector_results, _ = self.roi_heads(images, features, proposals, None) - - if do_postprocess: - processed_results = [] - for sem_seg_result, detector_result, input_per_image, image_size in zip( - sem_seg_results, detector_results, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - sem_seg_r = sem_seg_postprocess(sem_seg_result, image_size, height, width) - detector_r = detector_postprocess(detector_result, height, width) - - processed_results.append({"sem_seg": sem_seg_r, "instances": detector_r}) - - panoptic_r = combine_semantic_and_instance_outputs( - detector_r, - sem_seg_r.argmax(dim=0), - self.combine_overlap_thresh, - self.combine_stuff_area_thresh, - self.combine_instances_score_thresh, - ) - processed_results[-1]["panoptic_seg"] = panoptic_r - return processed_results - else: - return detector_results, sem_seg_results - - -def combine_semantic_and_instance_outputs( - instance_results, - semantic_results, - overlap_threshold, - stuff_area_thresh, - instances_score_thresh, -): - """ - Implement a simple combining logic following - "combine_semantic_and_instance_predictions.py" in panopticapi - to produce panoptic segmentation outputs. - - Args: - instance_results: output of :func:`detector_postprocess`. - semantic_results: an (H, W) tensor, each element is the contiguous semantic - category id - - Returns: - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". - """ - panoptic_seg = torch.zeros_like(semantic_results, dtype=torch.int32) - - # sort instance outputs by scores - sorted_inds = torch.argsort(-instance_results.scores) - - current_segment_id = 0 - segments_info = [] - - instance_masks = instance_results.pred_masks.to(dtype=torch.bool, device=panoptic_seg.device) - - # Add instances one-by-one, check for overlaps with existing ones - for inst_id in sorted_inds: - score = instance_results.scores[inst_id].item() - if score < instances_score_thresh: - break - mask = instance_masks[inst_id] # H,W - mask_area = mask.sum().item() - - if mask_area == 0: - continue - - intersect = (mask > 0) & (panoptic_seg > 0) - intersect_area = intersect.sum().item() - - if intersect_area * 1.0 / mask_area > overlap_threshold: - continue - - if intersect_area > 0: - mask = mask & (panoptic_seg == 0) - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - segments_info.append( - { - "id": current_segment_id, - "isthing": True, - "score": score, - "category_id": instance_results.pred_classes[inst_id].item(), - "instance_id": inst_id.item(), - } - ) - - # Add semantic results to remaining empty areas - semantic_labels = torch.unique(semantic_results).cpu().tolist() - for semantic_label in semantic_labels: - if semantic_label == 0: # 0 is a special "thing" class - continue - mask = (semantic_results == semantic_label) & (panoptic_seg == 0) - mask_area = mask.sum().item() - if mask_area < stuff_area_thresh: - continue - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - segments_info.append( - { - "id": current_segment_id, - "isthing": False, - "category_id": semantic_label, - "area": mask_area, - } - ) - - return panoptic_seg, segments_info diff --git a/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/models/Pooling.py b/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/models/Pooling.py deleted file mode 100644 index f75b16509d40d697de5a349cf261b0eceb4a4e87..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/models/Pooling.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -from torch import Tensor -from torch import nn -from typing import Union, Tuple, List, Iterable, Dict -import os -import json - - -class Pooling(nn.Module): - """Performs pooling (max or mean) on the token embeddings. - - Using pooling, it generates from a variable sized sentence a fixed sized sentence embedding. This layer also allows to use the CLS token if it is returned by the underlying word embedding model. - You can concatenate multiple poolings together. - - :param word_embedding_dimension: Dimensions for the word embeddings - :param pooling_mode: Can be a string: mean/max/cls. If set, overwrites the other pooling_mode_* settings - :param pooling_mode_cls_token: Use the first token (CLS token) as text representations - :param pooling_mode_max_tokens: Use max in each dimension over all tokens. - :param pooling_mode_mean_tokens: Perform mean-pooling - :param pooling_mode_mean_sqrt_len_tokens: Perform mean-pooling, but devide by sqrt(input_length). - """ - def __init__(self, - word_embedding_dimension: int, - pooling_mode: str = None, - pooling_mode_cls_token: bool = False, - pooling_mode_max_tokens: bool = False, - pooling_mode_mean_tokens: bool = True, - pooling_mode_mean_sqrt_len_tokens: bool = False, - ): - super(Pooling, self).__init__() - - self.config_keys = ['word_embedding_dimension', 'pooling_mode_cls_token', 'pooling_mode_mean_tokens', 'pooling_mode_max_tokens', 'pooling_mode_mean_sqrt_len_tokens'] - - if pooling_mode is not None: #Set pooling mode by string - pooling_mode = pooling_mode.lower() - assert pooling_mode in ['mean', 'max', 'cls'] - pooling_mode_cls_token = (pooling_mode == 'cls') - pooling_mode_max_tokens = (pooling_mode == 'max') - pooling_mode_mean_tokens = (pooling_mode == 'mean') - - self.word_embedding_dimension = word_embedding_dimension - self.pooling_mode_cls_token = pooling_mode_cls_token - self.pooling_mode_mean_tokens = pooling_mode_mean_tokens - self.pooling_mode_max_tokens = pooling_mode_max_tokens - self.pooling_mode_mean_sqrt_len_tokens = pooling_mode_mean_sqrt_len_tokens - - pooling_mode_multiplier = sum([pooling_mode_cls_token, pooling_mode_max_tokens, pooling_mode_mean_tokens, pooling_mode_mean_sqrt_len_tokens]) - self.pooling_output_dimension = (pooling_mode_multiplier * word_embedding_dimension) - - - def __repr__(self): - return "Pooling({})".format(self.get_config_dict()) - - def get_pooling_mode_str(self) -> str: - """ - Returns the pooling mode as string - """ - modes = [] - if self.pooling_mode_cls_token: - modes.append('cls') - if self.pooling_mode_mean_tokens: - modes.append('mean') - if self.pooling_mode_max_tokens: - modes.append('max') - if self.pooling_mode_mean_sqrt_len_tokens: - modes.append('mean_sqrt_len_tokens') - - return "+".join(modes) - - def forward(self, features: Dict[str, Tensor]): - token_embeddings = features['token_embeddings'] - attention_mask = features['attention_mask'] - - ## Pooling strategy - output_vectors = [] - if self.pooling_mode_cls_token: - cls_token = features.get('cls_token_embeddings', token_embeddings[:, 0]) # Take first token by default - output_vectors.append(cls_token) - if self.pooling_mode_max_tokens: - input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value - max_over_time = torch.max(token_embeddings, 1)[0] - output_vectors.append(max_over_time) - if self.pooling_mode_mean_tokens or self.pooling_mode_mean_sqrt_len_tokens: - input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) - - #If tokens are weighted (by WordWeights layer), feature 'token_weights_sum' will be present - if 'token_weights_sum' in features: - sum_mask = features['token_weights_sum'].unsqueeze(-1).expand(sum_embeddings.size()) - else: - sum_mask = input_mask_expanded.sum(1) - - sum_mask = torch.clamp(sum_mask, min=1e-9) - - if self.pooling_mode_mean_tokens: - output_vectors.append(sum_embeddings / sum_mask) - if self.pooling_mode_mean_sqrt_len_tokens: - output_vectors.append(sum_embeddings / torch.sqrt(sum_mask)) - - output_vector = torch.cat(output_vectors, 1) - features.update({'sentence_embedding': output_vector}) - return features - - def get_sentence_embedding_dimension(self): - return self.pooling_output_dimension - - def get_config_dict(self): - return {key: self.__dict__[key] for key in self.config_keys} - - def save(self, output_path): - with open(os.path.join(output_path, 'config.json'), 'w') as fOut: - json.dump(self.get_config_dict(), fOut, indent=2) - - @staticmethod - def load(input_path): - with open(os.path.join(input_path, 'config.json')) as fIn: - config = json.load(fIn) - - return Pooling(**config) diff --git a/spaces/chendl/compositional_test/multimodal/tools/prepare_mini_blip2_dataset.py b/spaces/chendl/compositional_test/multimodal/tools/prepare_mini_blip2_dataset.py deleted file mode 100644 index 3ffaee6c64ac04d650673503d75e405503ffbcd5..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/tools/prepare_mini_blip2_dataset.py +++ /dev/null @@ -1,178 +0,0 @@ -import webdataset as wds -import glob -import os -from tqdm import tqdm -import orjson as json -import itertools -from PIL import Image -import numpy as np -from typing import List - -class Generator(): - def __init__(self, dataset_name): - self.dataset_name = dataset_name - self.is_end = False - -class CC3MGenerator(Generator): - def __init__(self, root: str, dataset_name="cc3m"): - super().__init__(dataset_name=dataset_name) - self.tars = glob.glob(os.path.join(root, "cc3m_*", "*.tar")) - - def __len__(self): - return 3000000 - - def __iter__(self): - for tar in self.tars: - dataset = wds.WebDataset(tar).decode("pilrgb").to_tuple("jpg;png;jpeg", "txt") - for data in dataset: - yield [self.dataset_name] + list(data) - self.is_end = True - -class CC12MGenerator(CC3MGenerator): - def __init__(self, root: str): - super().__init__(root, "cc12m") - self.tars = glob.glob(os.path.join(root, "*.tar")) - - def __len__(self): - return 12000000 - -class COCOGenerator(Generator): - def __init__(self, anno: str, image_dir): - super().__init__(dataset_name="coco") - data = json.loads(open(anno).read()) - self.annotations = data["annotations"] - self.image_id_to_filename = {} - for image in data["images"]: - file_name = image["file_name"] - image_id = image["id"] - self.image_id_to_filename[image_id] = os.path.join(image_dir, file_name) - - def __len__(self): - return len(self.annotations) - - def __iter__(self): - for anno in self.annotations: - image_id = anno["image_id"] - caption = anno["caption"] - try: - image = Image.open(self.image_id_to_filename[image_id]) - except: - continue - yield [self.dataset_name, image, caption] - self.is_end = True - - -class KarpathyCOCOGenerator(Generator): - def __init__(self, anno="/gpfs/u/home/LMCG/LMCGljnn/scratch/code/multimodal/tools/coco_karpathy_train.json", image_dir="/gpfs/u/home/LMCG/LMCGljnn/scratch/.cache/lavis/coco/images"): - super().__init__(dataset_name="coco") - data = json.loads(open(anno).read()) - self.annotations = data - self.image_id_to_filename = {} - for d in data: - self.image_id_to_filename[d["image_id"]] = os.path.join(image_dir, d["image"]) - - def __len__(self): - return len(self.annotations) - - def __iter__(self): - for anno in self.annotations: - image_id = anno["image_id"] - caption = anno["caption"] - try: - image = Image.open(self.image_id_to_filename[image_id]) - except: - print(self.image_id_to_filename[image_id]) - yield [self.dataset_name, image, caption] - self.is_end = True - - -class VisualGenomeGenerator(Generator): - def __init__(self, root: str): - super().__init__(dataset_name="vg") - data = json.loads(open(os.path.join(root, "region_descriptions.json")).read()) - image_data = json.loads(open(os.path.join(root, "image_data.json")).read()) - self.image_id_to_filename = {} - self.image_id_to_wh = {} - for image in image_data: - image_id = image["image_id"] - subfolder, filename = image['url'].split("/")[-2:] - self.image_id_to_filename[image_id] = os.path.join(root, subfolder, filename) - self.image_id_to_wh[image_id] = (image["width"], image["height"]) - self.regions = [] - total = 0 - total_image = 0 - used_image = 0 - for xx in data: - total_image += 1 - flag = False - for region in xx['regions']: - total += 1 - region_w = int(region["width"]) - region_h = int(region["height"]) - image_w = self.image_id_to_wh[region["image_id"]][0] - image_h = self.image_id_to_wh[region["image_id"]][1] - if region_w * region_h < (image_w * image_h) * 0.2: - continue - self.regions.append(region) - flag = True - if flag: - used_image += 1 - print("valid region", len(self.regions), total, len(self.regions) / total) - print("valid image", used_image, total_image, used_image / total_image) - - def __len__(self): - return len(self.regions) - - def __iter__(self): - for region in self.regions: - image_id = region["image_id"] - phrase = region["phrase"] - try: - image = Image.open(self.image_id_to_filename[image_id]) - except: - continue - yield [self.dataset_name, image, phrase] - self.is_end = True - -class ShuffleGenerator(): - def __init__(self, generators: List[Generator], p: List[int]): - self.generators = generators - self.p = list(np.array(p) / sum(p)) - self.ids = list(range(len(self.generators))) - print("rebalance", self.ids, self.p) - - def __len__(self): - return sum([len(g) for g in self.generators]) - - def __iter__(self): - while True: - if len(self.ids) == 0: - break - id = np.random.choice(self.ids, p=self.p) - gen = self.generators[id] - if gen.is_end: - print(gen.dataset_name, "is all done") - del self.ids[id] - del self.p[id] - self.p = list(np.array(self.p) / sum(p)) - print("rebalance", self.ids, self.p) - else: - return iter(gen) - - -if __name__ == "__main__": - OUT_DIR = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/vg_withBox_wds" - os.makedirs(OUT_DIR, exist_ok=True) - # cc3m_generator = CC3MGenerator("/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/cc3m") - # cc12m_generator = CC12MGenerator("/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/cc12m/tars") - # coco_generator = KarpathyCOCOGenerator() - visual_genome_generator = VisualGenomeGenerator("/gpfs/u/home/LMCG/LMCGljnn/scratch/datasets/raw/vg") - # generators = [cc3m_generator, cc12m_generator, coco_generator, visual_genome_generator] - # p = [len(generator) for generator in generators] - # dataset = ShuffleGenerator(generators, p) - - with wds.ShardWriter(os.path.join(OUT_DIR, "%05d.tar"), maxcount=8500) as sink: - sink.verbose = False - for i, data in enumerate(tqdm(visual_genome_generator)): - dataset_name, image, caption = data - sink.write({"__key__": f"{dataset_name}_{i}_containBox", "jpg": image, "txt": caption}) diff --git a/spaces/chendl/compositional_test/transformers/docs/source/en/_config.py b/spaces/chendl/compositional_test/transformers/docs/source/en/_config.py deleted file mode 100644 index cd76263e9a5cb2cc1a9e3e5709c44fd65331942f..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/docs/source/en/_config.py +++ /dev/null @@ -1,14 +0,0 @@ -# docstyle-ignore -INSTALL_CONTENT = """ -# Transformers installation -! pip install transformers datasets -# To install from source instead of the last release, comment the command above and uncomment the following one. -# ! pip install git+https://github.com/huggingface/transformers.git -""" - -notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] -black_avoid_patterns = { - "{processor_class}": "FakeProcessorClass", - "{model_class}": "FakeModelClass", - "{object_class}": "FakeObjectClass", -} diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run_tf_ner.py b/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run_tf_ner.py deleted file mode 100644 index a9c41d58183d4a8aa02f37430b381aba9dd3c45b..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run_tf_ner.py +++ /dev/null @@ -1,310 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2018 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Fine-tuning the library models for named entity recognition.""" - - -import logging -import os -from dataclasses import dataclass, field -from importlib import import_module -from typing import Dict, List, Optional, Tuple - -import numpy as np -from seqeval.metrics import classification_report, f1_score, precision_score, recall_score -from utils_ner import Split, TFTokenClassificationDataset, TokenClassificationTask - -from transformers import ( - AutoConfig, - AutoTokenizer, - EvalPrediction, - HfArgumentParser, - TFAutoModelForTokenClassification, - TFTrainer, - TFTrainingArguments, -) -from transformers.utils import logging as hf_logging - - -hf_logging.set_verbosity_info() -hf_logging.enable_default_handler() -hf_logging.enable_explicit_format() - - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. - """ - - model_name_or_path: str = field( - metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} - ) - config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} - ) - task_type: Optional[str] = field( - default="NER", metadata={"help": "Task type to fine tune in training (e.g. NER, POS, etc)"} - ) - tokenizer_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} - ) - use_fast: bool = field(default=False, metadata={"help": "Set this flag to use fast tokenization."}) - # If you want to tweak more attributes on your tokenizer, you should do it in a distinct script, - # or just modify its tokenizer_config.json. - cache_dir: Optional[str] = field( - default=None, - metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, - ) - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. - """ - - data_dir: str = field( - metadata={"help": "The input data dir. Should contain the .txt files for a CoNLL-2003-formatted task."} - ) - labels: Optional[str] = field( - metadata={"help": "Path to a file containing all labels. If not specified, CoNLL-2003 labels are used."} - ) - max_seq_length: int = field( - default=128, - metadata={ - "help": ( - "The maximum total input sequence length after tokenization. Sequences longer " - "than this will be truncated, sequences shorter will be padded." - ) - }, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - - -def main(): - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments)) - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - if ( - os.path.exists(training_args.output_dir) - and os.listdir(training_args.output_dir) - and training_args.do_train - and not training_args.overwrite_output_dir - ): - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty. Use" - " --overwrite_output_dir to overcome." - ) - - module = import_module("tasks") - - try: - token_classification_task_clazz = getattr(module, model_args.task_type) - token_classification_task: TokenClassificationTask = token_classification_task_clazz() - except AttributeError: - raise ValueError( - f"Task {model_args.task_type} needs to be defined as a TokenClassificationTask subclass in {module}. " - f"Available tasks classes are: {TokenClassificationTask.__subclasses__()}" - ) - - # Setup logging - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info( - "n_replicas: %s, distributed training: %s, 16-bits training: %s", - training_args.n_replicas, - bool(training_args.n_replicas > 1), - training_args.fp16, - ) - logger.info("Training/evaluation parameters %s", training_args) - - # Prepare Token Classification task - labels = token_classification_task.get_labels(data_args.labels) - label_map: Dict[int, str] = dict(enumerate(labels)) - num_labels = len(labels) - - # Load pretrained model and tokenizer - # - # Distributed training: - # The .from_pretrained methods guarantee that only one local process can concurrently - # download model & vocab. - - config = AutoConfig.from_pretrained( - model_args.config_name if model_args.config_name else model_args.model_name_or_path, - num_labels=num_labels, - id2label=label_map, - label2id={label: i for i, label in enumerate(labels)}, - cache_dir=model_args.cache_dir, - ) - tokenizer = AutoTokenizer.from_pretrained( - model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - use_fast=model_args.use_fast, - ) - - with training_args.strategy.scope(): - model = TFAutoModelForTokenClassification.from_pretrained( - model_args.model_name_or_path, - from_pt=bool(".bin" in model_args.model_name_or_path), - config=config, - cache_dir=model_args.cache_dir, - ) - - # Get datasets - train_dataset = ( - TFTokenClassificationDataset( - token_classification_task=token_classification_task, - data_dir=data_args.data_dir, - tokenizer=tokenizer, - labels=labels, - model_type=config.model_type, - max_seq_length=data_args.max_seq_length, - overwrite_cache=data_args.overwrite_cache, - mode=Split.train, - ) - if training_args.do_train - else None - ) - eval_dataset = ( - TFTokenClassificationDataset( - token_classification_task=token_classification_task, - data_dir=data_args.data_dir, - tokenizer=tokenizer, - labels=labels, - model_type=config.model_type, - max_seq_length=data_args.max_seq_length, - overwrite_cache=data_args.overwrite_cache, - mode=Split.dev, - ) - if training_args.do_eval - else None - ) - - def align_predictions(predictions: np.ndarray, label_ids: np.ndarray) -> Tuple[List[int], List[int]]: - preds = np.argmax(predictions, axis=2) - batch_size, seq_len = preds.shape - out_label_list = [[] for _ in range(batch_size)] - preds_list = [[] for _ in range(batch_size)] - - for i in range(batch_size): - for j in range(seq_len): - if label_ids[i, j] != -100: - out_label_list[i].append(label_map[label_ids[i][j]]) - preds_list[i].append(label_map[preds[i][j]]) - - return preds_list, out_label_list - - def compute_metrics(p: EvalPrediction) -> Dict: - preds_list, out_label_list = align_predictions(p.predictions, p.label_ids) - - return { - "precision": precision_score(out_label_list, preds_list), - "recall": recall_score(out_label_list, preds_list), - "f1": f1_score(out_label_list, preds_list), - } - - # Initialize our Trainer - trainer = TFTrainer( - model=model, - args=training_args, - train_dataset=train_dataset.get_dataset() if train_dataset else None, - eval_dataset=eval_dataset.get_dataset() if eval_dataset else None, - compute_metrics=compute_metrics, - ) - - # Training - if training_args.do_train: - trainer.train() - trainer.save_model() - tokenizer.save_pretrained(training_args.output_dir) - - # Evaluation - results = {} - if training_args.do_eval: - logger.info("*** Evaluate ***") - - result = trainer.evaluate() - output_eval_file = os.path.join(training_args.output_dir, "eval_results.txt") - - with open(output_eval_file, "w") as writer: - logger.info("***** Eval results *****") - - for key, value in result.items(): - logger.info(" %s = %s", key, value) - writer.write("%s = %s\n" % (key, value)) - - results.update(result) - - # Predict - if training_args.do_predict: - test_dataset = TFTokenClassificationDataset( - token_classification_task=token_classification_task, - data_dir=data_args.data_dir, - tokenizer=tokenizer, - labels=labels, - model_type=config.model_type, - max_seq_length=data_args.max_seq_length, - overwrite_cache=data_args.overwrite_cache, - mode=Split.test, - ) - - predictions, label_ids, metrics = trainer.predict(test_dataset.get_dataset()) - preds_list, labels_list = align_predictions(predictions, label_ids) - report = classification_report(labels_list, preds_list) - - logger.info("\n%s", report) - - output_test_results_file = os.path.join(training_args.output_dir, "test_results.txt") - - with open(output_test_results_file, "w") as writer: - writer.write("%s\n" % report) - - # Save predictions - output_test_predictions_file = os.path.join(training_args.output_dir, "test_predictions.txt") - - with open(output_test_predictions_file, "w") as writer: - with open(os.path.join(data_args.data_dir, "test.txt"), "r") as f: - example_id = 0 - - for line in f: - if line.startswith("-DOCSTART-") or line == "" or line == "\n": - writer.write(line) - - if not preds_list[example_id]: - example_id += 1 - elif preds_list[example_id]: - output_line = line.split()[0] + " " + preds_list[example_id].pop(0) + "\n" - - writer.write(output_line) - else: - logger.warning("Maximum sequence length exceeded: No prediction for '%s'.", line.split()[0]) - - return results - - -if __name__ == "__main__": - main() diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/hybrid_clip/modeling_hybrid_clip.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/hybrid_clip/modeling_hybrid_clip.py deleted file mode 100644 index e60f07bdd0632515c8c3be11a207fb0a28da8442..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/hybrid_clip/modeling_hybrid_clip.py +++ /dev/null @@ -1,424 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Optional, Tuple - -import flax.linen as nn -import jax -import jax.numpy as jnp -from configuration_hybrid_clip import HybridCLIPConfig -from flax.core.frozen_dict import FrozenDict - -from transformers import FLAX_MODEL_MAPPING, FlaxCLIPVisionModel -from transformers.modeling_flax_utils import FlaxPreTrainedModel -from transformers.models.clip.modeling_flax_clip import FlaxCLIPOutput -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - - -class FlaxHybridCLIPModule(nn.Module): - config: HybridCLIPConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - text_config = self.config.text_config - vision_config = self.config.vision_config - - self.projection_dim = self.config.projection_dim - self.text_embed_dim = text_config.hidden_size - self.vision_embed_dim = vision_config.hidden_size - - text_module = FLAX_MODEL_MAPPING[self.config.text_config.__class__].module_class - vision_module = FLAX_MODEL_MAPPING.get(self.config.vision_config.__class__, FlaxCLIPVisionModel).module_class - - self.text_model = text_module(text_config, dtype=self.dtype) - self.vision_model = vision_module(vision_config, dtype=self.dtype) - - self.visual_projection = nn.Dense( - self.projection_dim, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(0.02), - use_bias=False, - ) - self.text_projection = nn.Dense( - self.projection_dim, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(0.02), - use_bias=False, - ) - self.logit_scale = self.param("logit_scale", jax.nn.initializers.ones, []) - - def __call__( - self, - input_ids=None, - pixel_values=None, - attention_mask=None, - position_ids=None, - token_type_ids=None, - deterministic: bool = True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = return_dict if return_dict is not None else self.config.return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(text_embeds) - - # normalized features - image_embeds = image_embeds / jnp.linalg.norm(image_embeds, axis=-1, keepdims=True) - text_embeds = text_embeds / jnp.linalg.norm(text_embeds, axis=-1, keepdims=True) - - # cosine similarity as logits - logit_scale = jnp.exp(self.logit_scale) - logits_per_text = jnp.matmul(text_embeds, image_embeds.T) * logit_scale - logits_per_image = logits_per_text.T - - if not return_dict: - return (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) - - return FlaxCLIPOutput( - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) - - -class FlaxHybridCLIP(FlaxPreTrainedModel): - config_class = HybridCLIPConfig - module_class = FlaxHybridCLIPModule - - def __init__( - self, - config: HybridCLIPConfig, - input_shape: Optional[Tuple] = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs, - ): - if input_shape is None: - input_shape = ((1, 1), (1, config.vision_config.image_size, config.vision_config.image_size, 3)) - - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict: - # init input tensor - input_ids = jnp.zeros(input_shape[0], dtype="i4") - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape[0]) - token_type_ids = jnp.ones_like(input_ids) - attention_mask = jnp.ones_like(input_ids) - - pixel_values = jax.random.normal(rng, input_shape[1]) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.module.init(rngs, input_ids, pixel_values, attention_mask, position_ids, token_type_ids)["params"] - - def __call__( - self, - input_ids, - pixel_values, - attention_mask=None, - position_ids=None, - token_type_ids=None, - params: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - return self.module.apply( - {"params": params or self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(pixel_values, dtype=jnp.float32), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - not train, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - ) - - def get_text_features( - self, - input_ids, - attention_mask=None, - position_ids=None, - token_type_ids=None, - params: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train=False, - ): - r""" - Args: - input_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using :class:`~transformers.PreTrainedTokenizer`. See - :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` - for details. - - `What are input IDs? <../glossary.html#input-ids>`__ - - Returns: - text_features (:obj:`jnp.ndarray` of shape :obj:`(batch_size, output_dim`): The text embeddings - obtained by applying the projection layer to the pooled output of text model. - """ - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _get_features(module, input_ids, attention_mask, position_ids, token_type_ids, deterministic): - text_outputs = module.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - token_type_ids=token_type_ids, - deterministic=deterministic, - ) - pooled_output = text_outputs[1] - text_features = module.text_projection(pooled_output) - return text_features - - return self.module.apply( - {"params": params or self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - not train, - method=_get_features, - rngs=rngs, - ) - - def get_image_features( - self, pixel_values, params: dict = None, dropout_rng: jax.random.PRNGKey = None, train=False - ): - r""" - Args: - pixel_values (:obj:`numpy.ndarray` of shape :obj:`(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained - using :class:`~transformers.ImageFeatureExtractionMixin`. See - :meth:`transformers.ImageFeatureExtractionMixin.__call__` for details. - - Returns: - image_features (:obj:`jnp.ndarray` of shape :obj:`(batch_size, output_dim`): The image embeddings - obtained by applying the projection layer to the pooled output of vision model. - """ - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _get_features(module, pixel_values, deterministic): - vision_outputs = module.vision_model(pixel_values=pixel_values, deterministic=deterministic) - pooled_output = vision_outputs[1] # pooled_output - image_features = module.visual_projection(pooled_output) - return image_features - - return self.module.apply( - {"params": params or self.params}, - jnp.array(pixel_values, dtype=jnp.float32), - not train, - method=_get_features, - rngs=rngs, - ) - - @classmethod - def from_text_vision_pretrained( - cls, - text_model_name_or_path: str = None, - vision_model_name_or_path: str = None, - *model_args, - **kwargs, - ) -> FlaxPreTrainedModel: - """ - Params: - text_model_name_or_path (:obj: `str`, `optional`): - Information necessary to initiate the text model. Can be either: - - - A string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under - a user or organization name, like ``dbmdz/bert-base-german-cased``. - - A path to a `directory` containing model weights saved using - :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g., ``./my_model_directory/``. - - A path or url to a `PyTorch checkpoint folder` (e.g, ``./pt_model``). In - this case, ``from_pt`` should be set to :obj:`True` and a configuration object should be provided - as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in - a Flax model using the provided conversion scripts and loading the Flax model afterwards. - - vision_model_name_or_path (:obj: `str`, `optional`, defaults to `None`): - Information necessary to initiate the vision model. Can be either: - - - A string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under - a user or organization name, like ``dbmdz/bert-base-german-cased``. - - A path to a `directory` containing model weights saved using - :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g., ``./my_model_directory/``. - - A path or url to a `PyTorch checkpoint folder` (e.g, ``./pt_model``). In - this case, ``from_pt`` should be set to :obj:`True` and a configuration object should be provided - as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in - a Flax model using the provided conversion scripts and loading the Flax model afterwards. - - model_args (remaining positional arguments, `optional`): - All remaning positional arguments will be passed to the underlying model's ``__init__`` method. - - kwargs (remaining dictionary of keyword arguments, `optional`): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - :obj:`output_attentions=True`). - - - To update the text configuration, use the prefix `text_` for each configuration parameter. - - To update the vision configuration, use the prefix `vision_` for each configuration parameter. - - To update the parent model configuration, do not use a prefix for each configuration parameter. - - Behaves differently depending on whether a :obj:`config` is provided or automatically loaded. - - Example:: - - >>> from transformers import FlaxHybridCLIP - >>> # initialize a model from pretrained BERT and CLIP models. Note that the projection layers will be randomly initialized. - >>> # If using CLIP's vision model the vision projection layer will be initialized using pre-trained weights - >>> model = FlaxHybridCLIP.from_text_vision_pretrained('bert-base-uncased', 'openai/clip-vit-base-patch32') - >>> # saving model after fine-tuning - >>> model.save_pretrained("./bert-clip") - >>> # load fine-tuned model - >>> model = FlaxHybridCLIP.from_pretrained("./bert-clip") - """ - - kwargs_text = { - argument[len("text_") :]: value for argument, value in kwargs.items() if argument.startswith("text_") - } - - kwargs_vision = { - argument[len("vision_") :]: value for argument, value in kwargs.items() if argument.startswith("vision_") - } - - # remove text, vision kwargs from kwargs - for key in kwargs_text.keys(): - del kwargs["text_" + key] - for key in kwargs_vision.keys(): - del kwargs["vision_" + key] - - # Load and initialize the text and vision model - text_model = kwargs_text.pop("model", None) - if text_model is None: - assert ( - text_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `text_model_name_or_path` has to be defined" - from transformers import FlaxAutoModel - - if "config" not in kwargs_text: - from transformers import AutoConfig - - text_config = AutoConfig.from_pretrained(text_model_name_or_path) - kwargs_text["config"] = text_config - - text_model = FlaxAutoModel.from_pretrained(text_model_name_or_path, *model_args, **kwargs_text) - - vision_model = kwargs_vision.pop("model", None) - if vision_model is None: - assert ( - vision_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `vision_model_name_or_path` has to be defined" - from transformers import FlaxAutoModel - - if "config" not in kwargs_vision: - from transformers import AutoConfig - - vision_config = AutoConfig.from_pretrained(vision_model_name_or_path) - kwargs_vision["config"] = vision_config - - vision_model = FlaxAutoModel.from_pretrained(vision_model_name_or_path, *model_args, **kwargs_vision) - - # instantiate config with corresponding kwargs - dtype = kwargs.pop("dtype", jnp.float32) - config = HybridCLIPConfig.from_text_vision_configs(text_model.config, vision_model.config, **kwargs) - - # init model - model = cls(config, *model_args, dtype=dtype, **kwargs) - - if vision_config.model_type == "clip": - model.params["vision_model"]["vision_model"] = vision_model.params["vision_model"] - model.params["visual_projection"]["kernel"] = vision_model.params["visual_projection"]["kernel"] - else: - model.params["vision_model"] = vision_model.params - - model.params["text_model"] = text_model.params - - return model diff --git a/spaces/cherry0021/lab-ni-doc/build_push_all.sh b/spaces/cherry0021/lab-ni-doc/build_push_all.sh deleted file mode 100644 index 9ac6cdaead29be7c0575ac0cd025228df7d265c0..0000000000000000000000000000000000000000 --- a/spaces/cherry0021/lab-ni-doc/build_push_all.sh +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env bash -cd $(cd -P -- "$(dirname -- "$0")" && pwd -P) - -# extract the branch-name that is built and pushed -export TAGNAME=$(git symbolic-ref -q HEAD) -export TAGNAME=${TAGNAME##refs/heads/} -export TAGNAME=${TAGNAME:-HEAD} -# manually set tag -# export TAGNAME="v1.5_cuda-11.6_ubuntu-20.04" -echo "Build and push images full, python-only & slim for branch '$TAGNAME'." -if [[ "$TAGNAME" != "v"*"_cuda-"*"_ubuntu-"* ]]; then - echo "ERROR, build_push_all.sh only possible within branches of shape 'v'*'_cuda-'*'_ubuntu-'*." - exit 1 -fi - -###################### build, run and push full image ########################## -echo -echo -echo "build, run and push full image with tag $TAGNAME." -bash generate-Dockerfile.sh -docker build --no-cache -t cschranz/gpu-jupyter:$TAGNAME .build/ # build this from a fresh install - -export IMG_ID=$(docker image ls | grep $TAGNAME | grep -v _python-only | grep -v _slim | head -1 | awk '{print $3}') -echo "push image with ID $IMG_ID and Tag $TAGNAME ." - -docker tag $IMG_ID cschranz/gpu-jupyter:$TAGNAME -docker rm -f gpu-jupyter_1 -docker run --gpus all -d -it -p 8848:8888 -v $(pwd)/data:/home/jovyan/work -e GRANT_SUDO=yes -e JUPYTER_ENABLE_LAB=yes --user root --restart always --name gpu-jupyter_1 cschranz/gpu-jupyter:$TAGNAME - -docker push cschranz/gpu-jupyter:$TAGNAME - - -###################### build and push slim image ########################## -echo -echo -echo "build and push slim image with tag ${TAGNAME}_slim." -bash generate-Dockerfile.sh --slim -docker build -t cschranz/gpu-jupyter:${TAGNAME}_slim .build/ - -export IMG_ID=$(docker image ls | grep ${TAGNAME}_slim | head -1 | awk '{print $3}') -echo "push image with ID $IMG_ID and Tag ${TAGNAME}_slim." - -docker tag $IMG_ID cschranz/gpu-jupyter:${TAGNAME}_slim -docker push cschranz/gpu-jupyter:${TAGNAME}_slim - - -###################### build and push python-only image ########################## -echo -echo -echo "build and push slim image with tag ${TAGNAME}_python-only." -bash generate-Dockerfile.sh --python-only -docker build -t cschranz/gpu-jupyter:${TAGNAME}_python-only .build/ - -export IMG_ID=$(docker image ls | grep ${TAGNAME}_python-only | head -1 | awk '{print $3}') -echo "push image with ID $IMG_ID and Tag ${TAGNAME}_python-only." - -docker tag $IMG_ID cschranz/gpu-jupyter:${TAGNAME}_python-only -docker push cschranz/gpu-jupyter:${TAGNAME}_python-only diff --git a/spaces/chilge/Fushimi/commons.py b/spaces/chilge/Fushimi/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/chilge/Fushimi/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageFile.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageFile.py deleted file mode 100644 index 8e4f7dfb2c8854ee3a1f65efd6535732df1764aa..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageFile.py +++ /dev/null @@ -1,773 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# base class for image file handlers -# -# history: -# 1995-09-09 fl Created -# 1996-03-11 fl Fixed load mechanism. -# 1996-04-15 fl Added pcx/xbm decoders. -# 1996-04-30 fl Added encoders. -# 1996-12-14 fl Added load helpers -# 1997-01-11 fl Use encode_to_file where possible -# 1997-08-27 fl Flush output in _save -# 1998-03-05 fl Use memory mapping for some modes -# 1999-02-04 fl Use memory mapping also for "I;16" and "I;16B" -# 1999-05-31 fl Added image parser -# 2000-10-12 fl Set readonly flag on memory-mapped images -# 2002-03-20 fl Use better messages for common decoder errors -# 2003-04-21 fl Fall back on mmap/map_buffer if map is not available -# 2003-10-30 fl Added StubImageFile class -# 2004-02-25 fl Made incremental parser more robust -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1995-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import io -import itertools -import struct -import sys - -from . import Image -from ._util import is_path - -MAXBLOCK = 65536 - -SAFEBLOCK = 1024 * 1024 - -LOAD_TRUNCATED_IMAGES = False -"""Whether or not to load truncated image files. User code may change this.""" - -ERRORS = { - -1: "image buffer overrun error", - -2: "decoding error", - -3: "unknown error", - -8: "bad configuration", - -9: "out of memory error", -} -""" -Dict of known error codes returned from :meth:`.PyDecoder.decode`, -:meth:`.PyEncoder.encode` :meth:`.PyEncoder.encode_to_pyfd` and -:meth:`.PyEncoder.encode_to_file`. -""" - - -# -# -------------------------------------------------------------------- -# Helpers - - -def raise_oserror(error): - try: - msg = Image.core.getcodecstatus(error) - except AttributeError: - msg = ERRORS.get(error) - if not msg: - msg = f"decoder error {error}" - msg += " when reading image file" - raise OSError(msg) - - -def _tilesort(t): - # sort on offset - return t[2] - - -# -# -------------------------------------------------------------------- -# ImageFile base class - - -class ImageFile(Image.Image): - """Base class for image file format handlers.""" - - def __init__(self, fp=None, filename=None): - super().__init__() - - self._min_frame = 0 - - self.custom_mimetype = None - - self.tile = None - """ A list of tile descriptors, or ``None`` """ - - self.readonly = 1 # until we know better - - self.decoderconfig = () - self.decodermaxblock = MAXBLOCK - - if is_path(fp): - # filename - self.fp = open(fp, "rb") - self.filename = fp - self._exclusive_fp = True - else: - # stream - self.fp = fp - self.filename = filename - # can be overridden - self._exclusive_fp = None - - try: - try: - self._open() - except ( - IndexError, # end of data - TypeError, # end of data (ord) - KeyError, # unsupported mode - EOFError, # got header but not the first frame - struct.error, - ) as v: - raise SyntaxError(v) from v - - if not self.mode or self.size[0] <= 0 or self.size[1] <= 0: - msg = "not identified by this driver" - raise SyntaxError(msg) - except BaseException: - # close the file only if we have opened it this constructor - if self._exclusive_fp: - self.fp.close() - raise - - def get_format_mimetype(self): - if self.custom_mimetype: - return self.custom_mimetype - if self.format is not None: - return Image.MIME.get(self.format.upper()) - - def __setstate__(self, state): - self.tile = [] - super().__setstate__(state) - - def verify(self): - """Check file integrity""" - - # raise exception if something's wrong. must be called - # directly after open, and closes file when finished. - if self._exclusive_fp: - self.fp.close() - self.fp = None - - def load(self): - """Load image data based on tile list""" - - if self.tile is None: - msg = "cannot load this image" - raise OSError(msg) - - pixel = Image.Image.load(self) - if not self.tile: - return pixel - - self.map = None - use_mmap = self.filename and len(self.tile) == 1 - # As of pypy 2.1.0, memory mapping was failing here. - use_mmap = use_mmap and not hasattr(sys, "pypy_version_info") - - readonly = 0 - - # look for read/seek overrides - try: - read = self.load_read - # don't use mmap if there are custom read/seek functions - use_mmap = False - except AttributeError: - read = self.fp.read - - try: - seek = self.load_seek - use_mmap = False - except AttributeError: - seek = self.fp.seek - - if use_mmap: - # try memory mapping - decoder_name, extents, offset, args = self.tile[0] - if ( - decoder_name == "raw" - and len(args) >= 3 - and args[0] == self.mode - and args[0] in Image._MAPMODES - ): - try: - # use mmap, if possible - import mmap - - with open(self.filename) as fp: - self.map = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ) - if offset + self.size[1] * args[1] > self.map.size(): - # buffer is not large enough - raise OSError - self.im = Image.core.map_buffer( - self.map, self.size, decoder_name, offset, args - ) - readonly = 1 - # After trashing self.im, - # we might need to reload the palette data. - if self.palette: - self.palette.dirty = 1 - except (AttributeError, OSError, ImportError): - self.map = None - - self.load_prepare() - err_code = -3 # initialize to unknown error - if not self.map: - # sort tiles in file order - self.tile.sort(key=_tilesort) - - try: - # FIXME: This is a hack to handle TIFF's JpegTables tag. - prefix = self.tile_prefix - except AttributeError: - prefix = b"" - - # Remove consecutive duplicates that only differ by their offset - self.tile = [ - list(tiles)[-1] - for _, tiles in itertools.groupby( - self.tile, lambda tile: (tile[0], tile[1], tile[3]) - ) - ] - for decoder_name, extents, offset, args in self.tile: - seek(offset) - decoder = Image._getdecoder( - self.mode, decoder_name, args, self.decoderconfig - ) - try: - decoder.setimage(self.im, extents) - if decoder.pulls_fd: - decoder.setfd(self.fp) - err_code = decoder.decode(b"")[1] - else: - b = prefix - while True: - try: - s = read(self.decodermaxblock) - except (IndexError, struct.error) as e: - # truncated png/gif - if LOAD_TRUNCATED_IMAGES: - break - else: - msg = "image file is truncated" - raise OSError(msg) from e - - if not s: # truncated jpeg - if LOAD_TRUNCATED_IMAGES: - break - else: - msg = ( - "image file is truncated " - f"({len(b)} bytes not processed)" - ) - raise OSError(msg) - - b = b + s - n, err_code = decoder.decode(b) - if n < 0: - break - b = b[n:] - finally: - # Need to cleanup here to prevent leaks - decoder.cleanup() - - self.tile = [] - self.readonly = readonly - - self.load_end() - - if self._exclusive_fp and self._close_exclusive_fp_after_loading: - self.fp.close() - self.fp = None - - if not self.map and not LOAD_TRUNCATED_IMAGES and err_code < 0: - # still raised if decoder fails to return anything - raise_oserror(err_code) - - return Image.Image.load(self) - - def load_prepare(self): - # create image memory if necessary - if not self.im or self.im.mode != self.mode or self.im.size != self.size: - self.im = Image.core.new(self.mode, self.size) - # create palette (optional) - if self.mode == "P": - Image.Image.load(self) - - def load_end(self): - # may be overridden - pass - - # may be defined for contained formats - # def load_seek(self, pos): - # pass - - # may be defined for blocked formats (e.g. PNG) - # def load_read(self, bytes): - # pass - - def _seek_check(self, frame): - if ( - frame < self._min_frame - # Only check upper limit on frames if additional seek operations - # are not required to do so - or ( - not (hasattr(self, "_n_frames") and self._n_frames is None) - and frame >= self.n_frames + self._min_frame - ) - ): - msg = "attempt to seek outside sequence" - raise EOFError(msg) - - return self.tell() != frame - - -class StubImageFile(ImageFile): - """ - Base class for stub image loaders. - - A stub loader is an image loader that can identify files of a - certain format, but relies on external code to load the file. - """ - - def _open(self): - msg = "StubImageFile subclass must implement _open" - raise NotImplementedError(msg) - - def load(self): - loader = self._load() - if loader is None: - msg = f"cannot find loader for this {self.format} file" - raise OSError(msg) - image = loader.load(self) - assert image is not None - # become the other object (!) - self.__class__ = image.__class__ - self.__dict__ = image.__dict__ - return image.load() - - def _load(self): - """(Hook) Find actual image loader.""" - msg = "StubImageFile subclass must implement _load" - raise NotImplementedError(msg) - - -class Parser: - """ - Incremental image parser. This class implements the standard - feed/close consumer interface. - """ - - incremental = None - image = None - data = None - decoder = None - offset = 0 - finished = 0 - - def reset(self): - """ - (Consumer) Reset the parser. Note that you can only call this - method immediately after you've created a parser; parser - instances cannot be reused. - """ - assert self.data is None, "cannot reuse parsers" - - def feed(self, data): - """ - (Consumer) Feed data to the parser. - - :param data: A string buffer. - :exception OSError: If the parser failed to parse the image file. - """ - # collect data - - if self.finished: - return - - if self.data is None: - self.data = data - else: - self.data = self.data + data - - # parse what we have - if self.decoder: - if self.offset > 0: - # skip header - skip = min(len(self.data), self.offset) - self.data = self.data[skip:] - self.offset = self.offset - skip - if self.offset > 0 or not self.data: - return - - n, e = self.decoder.decode(self.data) - - if n < 0: - # end of stream - self.data = None - self.finished = 1 - if e < 0: - # decoding error - self.image = None - raise_oserror(e) - else: - # end of image - return - self.data = self.data[n:] - - elif self.image: - # if we end up here with no decoder, this file cannot - # be incrementally parsed. wait until we've gotten all - # available data - pass - - else: - # attempt to open this file - try: - with io.BytesIO(self.data) as fp: - im = Image.open(fp) - except OSError: - # traceback.print_exc() - pass # not enough data - else: - flag = hasattr(im, "load_seek") or hasattr(im, "load_read") - if flag or len(im.tile) != 1: - # custom load code, or multiple tiles - self.decode = None - else: - # initialize decoder - im.load_prepare() - d, e, o, a = im.tile[0] - im.tile = [] - self.decoder = Image._getdecoder(im.mode, d, a, im.decoderconfig) - self.decoder.setimage(im.im, e) - - # calculate decoder offset - self.offset = o - if self.offset <= len(self.data): - self.data = self.data[self.offset :] - self.offset = 0 - - self.image = im - - def __enter__(self): - return self - - def __exit__(self, *args): - self.close() - - def close(self): - """ - (Consumer) Close the stream. - - :returns: An image object. - :exception OSError: If the parser failed to parse the image file either - because it cannot be identified or cannot be - decoded. - """ - # finish decoding - if self.decoder: - # get rid of what's left in the buffers - self.feed(b"") - self.data = self.decoder = None - if not self.finished: - msg = "image was incomplete" - raise OSError(msg) - if not self.image: - msg = "cannot parse this image" - raise OSError(msg) - if self.data: - # incremental parsing not possible; reopen the file - # not that we have all data - with io.BytesIO(self.data) as fp: - try: - self.image = Image.open(fp) - finally: - self.image.load() - return self.image - - -# -------------------------------------------------------------------- - - -def _save(im, fp, tile, bufsize=0): - """Helper to save image based on tile list - - :param im: Image object. - :param fp: File object. - :param tile: Tile list. - :param bufsize: Optional buffer size - """ - - im.load() - if not hasattr(im, "encoderconfig"): - im.encoderconfig = () - tile.sort(key=_tilesort) - # FIXME: make MAXBLOCK a configuration parameter - # It would be great if we could have the encoder specify what it needs - # But, it would need at least the image size in most cases. RawEncode is - # a tricky case. - bufsize = max(MAXBLOCK, bufsize, im.size[0] * 4) # see RawEncode.c - try: - fh = fp.fileno() - fp.flush() - _encode_tile(im, fp, tile, bufsize, fh) - except (AttributeError, io.UnsupportedOperation) as exc: - _encode_tile(im, fp, tile, bufsize, None, exc) - if hasattr(fp, "flush"): - fp.flush() - - -def _encode_tile(im, fp, tile, bufsize, fh, exc=None): - for e, b, o, a in tile: - if o > 0: - fp.seek(o) - encoder = Image._getencoder(im.mode, e, a, im.encoderconfig) - try: - encoder.setimage(im.im, b) - if encoder.pushes_fd: - encoder.setfd(fp) - errcode = encoder.encode_to_pyfd()[1] - else: - if exc: - # compress to Python file-compatible object - while True: - errcode, data = encoder.encode(bufsize)[1:] - fp.write(data) - if errcode: - break - else: - # slight speedup: compress to real file object - errcode = encoder.encode_to_file(fh, bufsize) - if errcode < 0: - msg = f"encoder error {errcode} when writing image file" - raise OSError(msg) from exc - finally: - encoder.cleanup() - - -def _safe_read(fp, size): - """ - Reads large blocks in a safe way. Unlike fp.read(n), this function - doesn't trust the user. If the requested size is larger than - SAFEBLOCK, the file is read block by block. - - :param fp: File handle. Must implement a read method. - :param size: Number of bytes to read. - :returns: A string containing size bytes of data. - - Raises an OSError if the file is truncated and the read cannot be completed - - """ - if size <= 0: - return b"" - if size <= SAFEBLOCK: - data = fp.read(size) - if len(data) < size: - msg = "Truncated File Read" - raise OSError(msg) - return data - data = [] - remaining_size = size - while remaining_size > 0: - block = fp.read(min(remaining_size, SAFEBLOCK)) - if not block: - break - data.append(block) - remaining_size -= len(block) - if sum(len(d) for d in data) < size: - msg = "Truncated File Read" - raise OSError(msg) - return b"".join(data) - - -class PyCodecState: - def __init__(self): - self.xsize = 0 - self.ysize = 0 - self.xoff = 0 - self.yoff = 0 - - def extents(self): - return self.xoff, self.yoff, self.xoff + self.xsize, self.yoff + self.ysize - - -class PyCodec: - def __init__(self, mode, *args): - self.im = None - self.state = PyCodecState() - self.fd = None - self.mode = mode - self.init(args) - - def init(self, args): - """ - Override to perform codec specific initialization - - :param args: Array of args items from the tile entry - :returns: None - """ - self.args = args - - def cleanup(self): - """ - Override to perform codec specific cleanup - - :returns: None - """ - pass - - def setfd(self, fd): - """ - Called from ImageFile to set the Python file-like object - - :param fd: A Python file-like object - :returns: None - """ - self.fd = fd - - def setimage(self, im, extents=None): - """ - Called from ImageFile to set the core output image for the codec - - :param im: A core image object - :param extents: a 4 tuple of (x0, y0, x1, y1) defining the rectangle - for this tile - :returns: None - """ - - # following c code - self.im = im - - if extents: - (x0, y0, x1, y1) = extents - else: - (x0, y0, x1, y1) = (0, 0, 0, 0) - - if x0 == 0 and x1 == 0: - self.state.xsize, self.state.ysize = self.im.size - else: - self.state.xoff = x0 - self.state.yoff = y0 - self.state.xsize = x1 - x0 - self.state.ysize = y1 - y0 - - if self.state.xsize <= 0 or self.state.ysize <= 0: - msg = "Size cannot be negative" - raise ValueError(msg) - - if ( - self.state.xsize + self.state.xoff > self.im.size[0] - or self.state.ysize + self.state.yoff > self.im.size[1] - ): - msg = "Tile cannot extend outside image" - raise ValueError(msg) - - -class PyDecoder(PyCodec): - """ - Python implementation of a format decoder. Override this class and - add the decoding logic in the :meth:`decode` method. - - See :ref:`Writing Your Own File Codec in Python` - """ - - _pulls_fd = False - - @property - def pulls_fd(self): - return self._pulls_fd - - def decode(self, buffer): - """ - Override to perform the decoding process. - - :param buffer: A bytes object with the data to be decoded. - :returns: A tuple of ``(bytes consumed, errcode)``. - If finished with decoding return -1 for the bytes consumed. - Err codes are from :data:`.ImageFile.ERRORS`. - """ - raise NotImplementedError() - - def set_as_raw(self, data, rawmode=None): - """ - Convenience method to set the internal image from a stream of raw data - - :param data: Bytes to be set - :param rawmode: The rawmode to be used for the decoder. - If not specified, it will default to the mode of the image - :returns: None - """ - - if not rawmode: - rawmode = self.mode - d = Image._getdecoder(self.mode, "raw", rawmode) - d.setimage(self.im, self.state.extents()) - s = d.decode(data) - - if s[0] >= 0: - msg = "not enough image data" - raise ValueError(msg) - if s[1] != 0: - msg = "cannot decode image data" - raise ValueError(msg) - - -class PyEncoder(PyCodec): - """ - Python implementation of a format encoder. Override this class and - add the decoding logic in the :meth:`encode` method. - - See :ref:`Writing Your Own File Codec in Python` - """ - - _pushes_fd = False - - @property - def pushes_fd(self): - return self._pushes_fd - - def encode(self, bufsize): - """ - Override to perform the encoding process. - - :param bufsize: Buffer size. - :returns: A tuple of ``(bytes encoded, errcode, bytes)``. - If finished with encoding return 1 for the error code. - Err codes are from :data:`.ImageFile.ERRORS`. - """ - raise NotImplementedError() - - def encode_to_pyfd(self): - """ - If ``pushes_fd`` is ``True``, then this method will be used, - and ``encode()`` will only be called once. - - :returns: A tuple of ``(bytes consumed, errcode)``. - Err codes are from :data:`.ImageFile.ERRORS`. - """ - if not self.pushes_fd: - return 0, -8 # bad configuration - bytes_consumed, errcode, data = self.encode(0) - if data: - self.fd.write(data) - return bytes_consumed, errcode - - def encode_to_file(self, fh, bufsize): - """ - :param fh: File handle. - :param bufsize: Buffer size. - - :returns: If finished successfully, return 0. - Otherwise, return an error code. Err codes are from - :data:`.ImageFile.ERRORS`. - """ - errcode = 0 - while errcode == 0: - status, errcode, buf = self.encode(bufsize) - if status > 0: - fh.write(buf[status:]) - return errcode diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/openapi/docs.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/openapi/docs.py deleted file mode 100644 index 81f67dcc5bf59d32c7c8e59d5f345002d114a9ef..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/openapi/docs.py +++ /dev/null @@ -1,203 +0,0 @@ -import json -from typing import Any, Dict, Optional - -from fastapi.encoders import jsonable_encoder -from starlette.responses import HTMLResponse - -swagger_ui_default_parameters = { - "dom_id": "#swagger-ui", - "layout": "BaseLayout", - "deepLinking": True, - "showExtensions": True, - "showCommonExtensions": True, -} - - -def get_swagger_ui_html( - *, - openapi_url: str, - title: str, - swagger_js_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-bundle.js", - swagger_css_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css", - swagger_favicon_url: str = "https://fastapi.tiangolo.com/img/favicon.png", - oauth2_redirect_url: Optional[str] = None, - init_oauth: Optional[Dict[str, Any]] = None, - swagger_ui_parameters: Optional[Dict[str, Any]] = None, -) -> HTMLResponse: - current_swagger_ui_parameters = swagger_ui_default_parameters.copy() - if swagger_ui_parameters: - current_swagger_ui_parameters.update(swagger_ui_parameters) - - html = f""" - - - - - - {title} - - -
            -
            - - - - - - """ - return HTMLResponse(html) - - -def get_redoc_html( - *, - openapi_url: str, - title: str, - redoc_js_url: str = "https://cdn.jsdelivr.net/npm/redoc@next/bundles/redoc.standalone.js", - redoc_favicon_url: str = "https://fastapi.tiangolo.com/img/favicon.png", - with_google_fonts: bool = True, -) -> HTMLResponse: - html = f""" - - - - {title} - - - - """ - if with_google_fonts: - html += """ - - """ - html += f""" - - - - - - - - - - - """ - return HTMLResponse(html) - - -def get_swagger_ui_oauth2_redirect_html() -> HTMLResponse: - # copied from https://github.com/swagger-api/swagger-ui/blob/v4.14.0/dist/oauth2-redirect.html - html = """ - - - - Swagger UI: OAuth2 Redirect - - - - - - """ - return HTMLResponse(content=html) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-690664d1.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-690664d1.css deleted file mode 100644 index 858fdcc04577128b4960af9c51ca8c41e2fd69e4..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-690664d1.css +++ /dev/null @@ -1 +0,0 @@ -.wrap.svelte-1ck5uk8{display:flex;flex-direction:column;justify-content:center;min-height:var(--size-60);color:var(--block-label-text-color);line-height:var(--line-md)}.or.svelte-1ck5uk8{color:var(--body-text-color-subdued)}@media (min-width: 768px){.wrap.svelte-1ck5uk8{font-size:var(--text-lg)}} diff --git a/spaces/cihyFjudo/fairness-paper-search/Motorola Sbv5121 Voip Cable Modem Driver A User Guide for Single and Multiple Users.md b/spaces/cihyFjudo/fairness-paper-search/Motorola Sbv5121 Voip Cable Modem Driver A User Guide for Single and Multiple Users.md deleted file mode 100644 index eb69d727b8c1a8b38b3632a2a15de98b02787715..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Motorola Sbv5121 Voip Cable Modem Driver A User Guide for Single and Multiple Users.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Motorola Sbv5121 Voip Cable Modem Driver


            DOWNLOAD ››› https://tinurli.com/2uwjJB



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/dateutil/tz/tz.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/dateutil/tz/tz.py deleted file mode 100644 index c67f56d4659f17aab4540dfd42511bb850871a77..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/dateutil/tz/tz.py +++ /dev/null @@ -1,1849 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers timezone implementations subclassing the abstract -:py:class:`datetime.tzinfo` type. There are classes to handle tzfile format -files (usually are in :file:`/etc/localtime`, :file:`/usr/share/zoneinfo`, -etc), TZ environment string (in all known formats), given ranges (with help -from relative deltas), local machine timezone, fixed offset timezone, and UTC -timezone. -""" -import datetime -import struct -import time -import sys -import os -import bisect -import weakref -from collections import OrderedDict - -import six -from six import string_types -from six.moves import _thread -from ._common import tzname_in_python2, _tzinfo -from ._common import tzrangebase, enfold -from ._common import _validate_fromutc_inputs - -from ._factories import _TzSingleton, _TzOffsetFactory -from ._factories import _TzStrFactory -try: - from .win import tzwin, tzwinlocal -except ImportError: - tzwin = tzwinlocal = None - -# For warning about rounding tzinfo -from warnings import warn - -ZERO = datetime.timedelta(0) -EPOCH = datetime.datetime.utcfromtimestamp(0) -EPOCHORDINAL = EPOCH.toordinal() - - -@six.add_metaclass(_TzSingleton) -class tzutc(datetime.tzinfo): - """ - This is a tzinfo object that represents the UTC time zone. - - **Examples:** - - .. doctest:: - - >>> from datetime import * - >>> from dateutil.tz import * - - >>> datetime.now() - datetime.datetime(2003, 9, 27, 9, 40, 1, 521290) - - >>> datetime.now(tzutc()) - datetime.datetime(2003, 9, 27, 12, 40, 12, 156379, tzinfo=tzutc()) - - >>> datetime.now(tzutc()).tzname() - 'UTC' - - .. versionchanged:: 2.7.0 - ``tzutc()`` is now a singleton, so the result of ``tzutc()`` will - always return the same object. - - .. doctest:: - - >>> from dateutil.tz import tzutc, UTC - >>> tzutc() is tzutc() - True - >>> tzutc() is UTC - True - """ - def utcoffset(self, dt): - return ZERO - - def dst(self, dt): - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return "UTC" - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - return False - - @_validate_fromutc_inputs - def fromutc(self, dt): - """ - Fast track version of fromutc() returns the original ``dt`` object for - any valid :py:class:`datetime.datetime` object. - """ - return dt - - def __eq__(self, other): - if not isinstance(other, (tzutc, tzoffset)): - return NotImplemented - - return (isinstance(other, tzutc) or - (isinstance(other, tzoffset) and other._offset == ZERO)) - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s()" % self.__class__.__name__ - - __reduce__ = object.__reduce__ - - -#: Convenience constant providing a :class:`tzutc()` instance -#: -#: .. versionadded:: 2.7.0 -UTC = tzutc() - - -@six.add_metaclass(_TzOffsetFactory) -class tzoffset(datetime.tzinfo): - """ - A simple class for representing a fixed offset from UTC. - - :param name: - The timezone name, to be returned when ``tzname()`` is called. - :param offset: - The time zone offset in seconds, or (since version 2.6.0, represented - as a :py:class:`datetime.timedelta` object). - """ - def __init__(self, name, offset): - self._name = name - - try: - # Allow a timedelta - offset = offset.total_seconds() - except (TypeError, AttributeError): - pass - - self._offset = datetime.timedelta(seconds=_get_supported_offset(offset)) - - def utcoffset(self, dt): - return self._offset - - def dst(self, dt): - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return self._name - - @_validate_fromutc_inputs - def fromutc(self, dt): - return dt + self._offset - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - return False - - def __eq__(self, other): - if not isinstance(other, tzoffset): - return NotImplemented - - return self._offset == other._offset - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s(%s, %s)" % (self.__class__.__name__, - repr(self._name), - int(self._offset.total_seconds())) - - __reduce__ = object.__reduce__ - - -class tzlocal(_tzinfo): - """ - A :class:`tzinfo` subclass built around the ``time`` timezone functions. - """ - def __init__(self): - super(tzlocal, self).__init__() - - self._std_offset = datetime.timedelta(seconds=-time.timezone) - if time.daylight: - self._dst_offset = datetime.timedelta(seconds=-time.altzone) - else: - self._dst_offset = self._std_offset - - self._dst_saved = self._dst_offset - self._std_offset - self._hasdst = bool(self._dst_saved) - self._tznames = tuple(time.tzname) - - def utcoffset(self, dt): - if dt is None and self._hasdst: - return None - - if self._isdst(dt): - return self._dst_offset - else: - return self._std_offset - - def dst(self, dt): - if dt is None and self._hasdst: - return None - - if self._isdst(dt): - return self._dst_offset - self._std_offset - else: - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return self._tznames[self._isdst(dt)] - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - naive_dst = self._naive_is_dst(dt) - return (not naive_dst and - (naive_dst != self._naive_is_dst(dt - self._dst_saved))) - - def _naive_is_dst(self, dt): - timestamp = _datetime_to_timestamp(dt) - return time.localtime(timestamp + time.timezone).tm_isdst - - def _isdst(self, dt, fold_naive=True): - # We can't use mktime here. It is unstable when deciding if - # the hour near to a change is DST or not. - # - # timestamp = time.mktime((dt.year, dt.month, dt.day, dt.hour, - # dt.minute, dt.second, dt.weekday(), 0, -1)) - # return time.localtime(timestamp).tm_isdst - # - # The code above yields the following result: - # - # >>> import tz, datetime - # >>> t = tz.tzlocal() - # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() - # 'BRDT' - # >>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname() - # 'BRST' - # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() - # 'BRST' - # >>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname() - # 'BRDT' - # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() - # 'BRDT' - # - # Here is a more stable implementation: - # - if not self._hasdst: - return False - - # Check for ambiguous times: - dstval = self._naive_is_dst(dt) - fold = getattr(dt, 'fold', None) - - if self.is_ambiguous(dt): - if fold is not None: - return not self._fold(dt) - else: - return True - - return dstval - - def __eq__(self, other): - if isinstance(other, tzlocal): - return (self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset) - elif isinstance(other, tzutc): - return (not self._hasdst and - self._tznames[0] in {'UTC', 'GMT'} and - self._std_offset == ZERO) - elif isinstance(other, tzoffset): - return (not self._hasdst and - self._tznames[0] == other._name and - self._std_offset == other._offset) - else: - return NotImplemented - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s()" % self.__class__.__name__ - - __reduce__ = object.__reduce__ - - -class _ttinfo(object): - __slots__ = ["offset", "delta", "isdst", "abbr", - "isstd", "isgmt", "dstoffset"] - - def __init__(self): - for attr in self.__slots__: - setattr(self, attr, None) - - def __repr__(self): - l = [] - for attr in self.__slots__: - value = getattr(self, attr) - if value is not None: - l.append("%s=%s" % (attr, repr(value))) - return "%s(%s)" % (self.__class__.__name__, ", ".join(l)) - - def __eq__(self, other): - if not isinstance(other, _ttinfo): - return NotImplemented - - return (self.offset == other.offset and - self.delta == other.delta and - self.isdst == other.isdst and - self.abbr == other.abbr and - self.isstd == other.isstd and - self.isgmt == other.isgmt and - self.dstoffset == other.dstoffset) - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __getstate__(self): - state = {} - for name in self.__slots__: - state[name] = getattr(self, name, None) - return state - - def __setstate__(self, state): - for name in self.__slots__: - if name in state: - setattr(self, name, state[name]) - - -class _tzfile(object): - """ - Lightweight class for holding the relevant transition and time zone - information read from binary tzfiles. - """ - attrs = ['trans_list', 'trans_list_utc', 'trans_idx', 'ttinfo_list', - 'ttinfo_std', 'ttinfo_dst', 'ttinfo_before', 'ttinfo_first'] - - def __init__(self, **kwargs): - for attr in self.attrs: - setattr(self, attr, kwargs.get(attr, None)) - - -class tzfile(_tzinfo): - """ - This is a ``tzinfo`` subclass that allows one to use the ``tzfile(5)`` - format timezone files to extract current and historical zone information. - - :param fileobj: - This can be an opened file stream or a file name that the time zone - information can be read from. - - :param filename: - This is an optional parameter specifying the source of the time zone - information in the event that ``fileobj`` is a file object. If omitted - and ``fileobj`` is a file stream, this parameter will be set either to - ``fileobj``'s ``name`` attribute or to ``repr(fileobj)``. - - See `Sources for Time Zone and Daylight Saving Time Data - `_ for more information. - Time zone files can be compiled from the `IANA Time Zone database files - `_ with the `zic time zone compiler - `_ - - .. note:: - - Only construct a ``tzfile`` directly if you have a specific timezone - file on disk that you want to read into a Python ``tzinfo`` object. - If you want to get a ``tzfile`` representing a specific IANA zone, - (e.g. ``'America/New_York'``), you should call - :func:`dateutil.tz.gettz` with the zone identifier. - - - **Examples:** - - Using the US Eastern time zone as an example, we can see that a ``tzfile`` - provides time zone information for the standard Daylight Saving offsets: - - .. testsetup:: tzfile - - from dateutil.tz import gettz - from datetime import datetime - - .. doctest:: tzfile - - >>> NYC = gettz('America/New_York') - >>> NYC - tzfile('/usr/share/zoneinfo/America/New_York') - - >>> print(datetime(2016, 1, 3, tzinfo=NYC)) # EST - 2016-01-03 00:00:00-05:00 - - >>> print(datetime(2016, 7, 7, tzinfo=NYC)) # EDT - 2016-07-07 00:00:00-04:00 - - - The ``tzfile`` structure contains a fully history of the time zone, - so historical dates will also have the right offsets. For example, before - the adoption of the UTC standards, New York used local solar mean time: - - .. doctest:: tzfile - - >>> print(datetime(1901, 4, 12, tzinfo=NYC)) # LMT - 1901-04-12 00:00:00-04:56 - - And during World War II, New York was on "Eastern War Time", which was a - state of permanent daylight saving time: - - .. doctest:: tzfile - - >>> print(datetime(1944, 2, 7, tzinfo=NYC)) # EWT - 1944-02-07 00:00:00-04:00 - - """ - - def __init__(self, fileobj, filename=None): - super(tzfile, self).__init__() - - file_opened_here = False - if isinstance(fileobj, string_types): - self._filename = fileobj - fileobj = open(fileobj, 'rb') - file_opened_here = True - elif filename is not None: - self._filename = filename - elif hasattr(fileobj, "name"): - self._filename = fileobj.name - else: - self._filename = repr(fileobj) - - if fileobj is not None: - if not file_opened_here: - fileobj = _nullcontext(fileobj) - - with fileobj as file_stream: - tzobj = self._read_tzfile(file_stream) - - self._set_tzdata(tzobj) - - def _set_tzdata(self, tzobj): - """ Set the time zone data of this object from a _tzfile object """ - # Copy the relevant attributes over as private attributes - for attr in _tzfile.attrs: - setattr(self, '_' + attr, getattr(tzobj, attr)) - - def _read_tzfile(self, fileobj): - out = _tzfile() - - # From tzfile(5): - # - # The time zone information files used by tzset(3) - # begin with the magic characters "TZif" to identify - # them as time zone information files, followed by - # sixteen bytes reserved for future use, followed by - # six four-byte values of type long, written in a - # ``standard'' byte order (the high-order byte - # of the value is written first). - if fileobj.read(4).decode() != "TZif": - raise ValueError("magic not found") - - fileobj.read(16) - - ( - # The number of UTC/local indicators stored in the file. - ttisgmtcnt, - - # The number of standard/wall indicators stored in the file. - ttisstdcnt, - - # The number of leap seconds for which data is - # stored in the file. - leapcnt, - - # The number of "transition times" for which data - # is stored in the file. - timecnt, - - # The number of "local time types" for which data - # is stored in the file (must not be zero). - typecnt, - - # The number of characters of "time zone - # abbreviation strings" stored in the file. - charcnt, - - ) = struct.unpack(">6l", fileobj.read(24)) - - # The above header is followed by tzh_timecnt four-byte - # values of type long, sorted in ascending order. - # These values are written in ``standard'' byte order. - # Each is used as a transition time (as returned by - # time(2)) at which the rules for computing local time - # change. - - if timecnt: - out.trans_list_utc = list(struct.unpack(">%dl" % timecnt, - fileobj.read(timecnt*4))) - else: - out.trans_list_utc = [] - - # Next come tzh_timecnt one-byte values of type unsigned - # char; each one tells which of the different types of - # ``local time'' types described in the file is associated - # with the same-indexed transition time. These values - # serve as indices into an array of ttinfo structures that - # appears next in the file. - - if timecnt: - out.trans_idx = struct.unpack(">%dB" % timecnt, - fileobj.read(timecnt)) - else: - out.trans_idx = [] - - # Each ttinfo structure is written as a four-byte value - # for tt_gmtoff of type long, in a standard byte - # order, followed by a one-byte value for tt_isdst - # and a one-byte value for tt_abbrind. In each - # structure, tt_gmtoff gives the number of - # seconds to be added to UTC, tt_isdst tells whether - # tm_isdst should be set by localtime(3), and - # tt_abbrind serves as an index into the array of - # time zone abbreviation characters that follow the - # ttinfo structure(s) in the file. - - ttinfo = [] - - for i in range(typecnt): - ttinfo.append(struct.unpack(">lbb", fileobj.read(6))) - - abbr = fileobj.read(charcnt).decode() - - # Then there are tzh_leapcnt pairs of four-byte - # values, written in standard byte order; the - # first value of each pair gives the time (as - # returned by time(2)) at which a leap second - # occurs; the second gives the total number of - # leap seconds to be applied after the given time. - # The pairs of values are sorted in ascending order - # by time. - - # Not used, for now (but seek for correct file position) - if leapcnt: - fileobj.seek(leapcnt * 8, os.SEEK_CUR) - - # Then there are tzh_ttisstdcnt standard/wall - # indicators, each stored as a one-byte value; - # they tell whether the transition times associated - # with local time types were specified as standard - # time or wall clock time, and are used when - # a time zone file is used in handling POSIX-style - # time zone environment variables. - - if ttisstdcnt: - isstd = struct.unpack(">%db" % ttisstdcnt, - fileobj.read(ttisstdcnt)) - - # Finally, there are tzh_ttisgmtcnt UTC/local - # indicators, each stored as a one-byte value; - # they tell whether the transition times associated - # with local time types were specified as UTC or - # local time, and are used when a time zone file - # is used in handling POSIX-style time zone envi- - # ronment variables. - - if ttisgmtcnt: - isgmt = struct.unpack(">%db" % ttisgmtcnt, - fileobj.read(ttisgmtcnt)) - - # Build ttinfo list - out.ttinfo_list = [] - for i in range(typecnt): - gmtoff, isdst, abbrind = ttinfo[i] - gmtoff = _get_supported_offset(gmtoff) - tti = _ttinfo() - tti.offset = gmtoff - tti.dstoffset = datetime.timedelta(0) - tti.delta = datetime.timedelta(seconds=gmtoff) - tti.isdst = isdst - tti.abbr = abbr[abbrind:abbr.find('\x00', abbrind)] - tti.isstd = (ttisstdcnt > i and isstd[i] != 0) - tti.isgmt = (ttisgmtcnt > i and isgmt[i] != 0) - out.ttinfo_list.append(tti) - - # Replace ttinfo indexes for ttinfo objects. - out.trans_idx = [out.ttinfo_list[idx] for idx in out.trans_idx] - - # Set standard, dst, and before ttinfos. before will be - # used when a given time is before any transitions, - # and will be set to the first non-dst ttinfo, or to - # the first dst, if all of them are dst. - out.ttinfo_std = None - out.ttinfo_dst = None - out.ttinfo_before = None - if out.ttinfo_list: - if not out.trans_list_utc: - out.ttinfo_std = out.ttinfo_first = out.ttinfo_list[0] - else: - for i in range(timecnt-1, -1, -1): - tti = out.trans_idx[i] - if not out.ttinfo_std and not tti.isdst: - out.ttinfo_std = tti - elif not out.ttinfo_dst and tti.isdst: - out.ttinfo_dst = tti - - if out.ttinfo_std and out.ttinfo_dst: - break - else: - if out.ttinfo_dst and not out.ttinfo_std: - out.ttinfo_std = out.ttinfo_dst - - for tti in out.ttinfo_list: - if not tti.isdst: - out.ttinfo_before = tti - break - else: - out.ttinfo_before = out.ttinfo_list[0] - - # Now fix transition times to become relative to wall time. - # - # I'm not sure about this. In my tests, the tz source file - # is setup to wall time, and in the binary file isstd and - # isgmt are off, so it should be in wall time. OTOH, it's - # always in gmt time. Let me know if you have comments - # about this. - lastdst = None - lastoffset = None - lastdstoffset = None - lastbaseoffset = None - out.trans_list = [] - - for i, tti in enumerate(out.trans_idx): - offset = tti.offset - dstoffset = 0 - - if lastdst is not None: - if tti.isdst: - if not lastdst: - dstoffset = offset - lastoffset - - if not dstoffset and lastdstoffset: - dstoffset = lastdstoffset - - tti.dstoffset = datetime.timedelta(seconds=dstoffset) - lastdstoffset = dstoffset - - # If a time zone changes its base offset during a DST transition, - # then you need to adjust by the previous base offset to get the - # transition time in local time. Otherwise you use the current - # base offset. Ideally, I would have some mathematical proof of - # why this is true, but I haven't really thought about it enough. - baseoffset = offset - dstoffset - adjustment = baseoffset - if (lastbaseoffset is not None and baseoffset != lastbaseoffset - and tti.isdst != lastdst): - # The base DST has changed - adjustment = lastbaseoffset - - lastdst = tti.isdst - lastoffset = offset - lastbaseoffset = baseoffset - - out.trans_list.append(out.trans_list_utc[i] + adjustment) - - out.trans_idx = tuple(out.trans_idx) - out.trans_list = tuple(out.trans_list) - out.trans_list_utc = tuple(out.trans_list_utc) - - return out - - def _find_last_transition(self, dt, in_utc=False): - # If there's no list, there are no transitions to find - if not self._trans_list: - return None - - timestamp = _datetime_to_timestamp(dt) - - # Find where the timestamp fits in the transition list - if the - # timestamp is a transition time, it's part of the "after" period. - trans_list = self._trans_list_utc if in_utc else self._trans_list - idx = bisect.bisect_right(trans_list, timestamp) - - # We want to know when the previous transition was, so subtract off 1 - return idx - 1 - - def _get_ttinfo(self, idx): - # For no list or after the last transition, default to _ttinfo_std - if idx is None or (idx + 1) >= len(self._trans_list): - return self._ttinfo_std - - # If there is a list and the time is before it, return _ttinfo_before - if idx < 0: - return self._ttinfo_before - - return self._trans_idx[idx] - - def _find_ttinfo(self, dt): - idx = self._resolve_ambiguous_time(dt) - - return self._get_ttinfo(idx) - - def fromutc(self, dt): - """ - The ``tzfile`` implementation of :py:func:`datetime.tzinfo.fromutc`. - - :param dt: - A :py:class:`datetime.datetime` object. - - :raises TypeError: - Raised if ``dt`` is not a :py:class:`datetime.datetime` object. - - :raises ValueError: - Raised if this is called with a ``dt`` which does not have this - ``tzinfo`` attached. - - :return: - Returns a :py:class:`datetime.datetime` object representing the - wall time in ``self``'s time zone. - """ - # These isinstance checks are in datetime.tzinfo, so we'll preserve - # them, even if we don't care about duck typing. - if not isinstance(dt, datetime.datetime): - raise TypeError("fromutc() requires a datetime argument") - - if dt.tzinfo is not self: - raise ValueError("dt.tzinfo is not self") - - # First treat UTC as wall time and get the transition we're in. - idx = self._find_last_transition(dt, in_utc=True) - tti = self._get_ttinfo(idx) - - dt_out = dt + datetime.timedelta(seconds=tti.offset) - - fold = self.is_ambiguous(dt_out, idx=idx) - - return enfold(dt_out, fold=int(fold)) - - def is_ambiguous(self, dt, idx=None): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - if idx is None: - idx = self._find_last_transition(dt) - - # Calculate the difference in offsets from current to previous - timestamp = _datetime_to_timestamp(dt) - tti = self._get_ttinfo(idx) - - if idx is None or idx <= 0: - return False - - od = self._get_ttinfo(idx - 1).offset - tti.offset - tt = self._trans_list[idx] # Transition time - - return timestamp < tt + od - - def _resolve_ambiguous_time(self, dt): - idx = self._find_last_transition(dt) - - # If we have no transitions, return the index - _fold = self._fold(dt) - if idx is None or idx == 0: - return idx - - # If it's ambiguous and we're in a fold, shift to a different index. - idx_offset = int(not _fold and self.is_ambiguous(dt, idx)) - - return idx - idx_offset - - def utcoffset(self, dt): - if dt is None: - return None - - if not self._ttinfo_std: - return ZERO - - return self._find_ttinfo(dt).delta - - def dst(self, dt): - if dt is None: - return None - - if not self._ttinfo_dst: - return ZERO - - tti = self._find_ttinfo(dt) - - if not tti.isdst: - return ZERO - - # The documentation says that utcoffset()-dst() must - # be constant for every dt. - return tti.dstoffset - - @tzname_in_python2 - def tzname(self, dt): - if not self._ttinfo_std or dt is None: - return None - return self._find_ttinfo(dt).abbr - - def __eq__(self, other): - if not isinstance(other, tzfile): - return NotImplemented - return (self._trans_list == other._trans_list and - self._trans_idx == other._trans_idx and - self._ttinfo_list == other._ttinfo_list) - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s(%s)" % (self.__class__.__name__, repr(self._filename)) - - def __reduce__(self): - return self.__reduce_ex__(None) - - def __reduce_ex__(self, protocol): - return (self.__class__, (None, self._filename), self.__dict__) - - -class tzrange(tzrangebase): - """ - The ``tzrange`` object is a time zone specified by a set of offsets and - abbreviations, equivalent to the way the ``TZ`` variable can be specified - in POSIX-like systems, but using Python delta objects to specify DST - start, end and offsets. - - :param stdabbr: - The abbreviation for standard time (e.g. ``'EST'``). - - :param stdoffset: - An integer or :class:`datetime.timedelta` object or equivalent - specifying the base offset from UTC. - - If unspecified, +00:00 is used. - - :param dstabbr: - The abbreviation for DST / "Summer" time (e.g. ``'EDT'``). - - If specified, with no other DST information, DST is assumed to occur - and the default behavior or ``dstoffset``, ``start`` and ``end`` is - used. If unspecified and no other DST information is specified, it - is assumed that this zone has no DST. - - If this is unspecified and other DST information is *is* specified, - DST occurs in the zone but the time zone abbreviation is left - unchanged. - - :param dstoffset: - A an integer or :class:`datetime.timedelta` object or equivalent - specifying the UTC offset during DST. If unspecified and any other DST - information is specified, it is assumed to be the STD offset +1 hour. - - :param start: - A :class:`relativedelta.relativedelta` object or equivalent specifying - the time and time of year that daylight savings time starts. To - specify, for example, that DST starts at 2AM on the 2nd Sunday in - March, pass: - - ``relativedelta(hours=2, month=3, day=1, weekday=SU(+2))`` - - If unspecified and any other DST information is specified, the default - value is 2 AM on the first Sunday in April. - - :param end: - A :class:`relativedelta.relativedelta` object or equivalent - representing the time and time of year that daylight savings time - ends, with the same specification method as in ``start``. One note is - that this should point to the first time in the *standard* zone, so if - a transition occurs at 2AM in the DST zone and the clocks are set back - 1 hour to 1AM, set the ``hours`` parameter to +1. - - - **Examples:** - - .. testsetup:: tzrange - - from dateutil.tz import tzrange, tzstr - - .. doctest:: tzrange - - >>> tzstr('EST5EDT') == tzrange("EST", -18000, "EDT") - True - - >>> from dateutil.relativedelta import * - >>> range1 = tzrange("EST", -18000, "EDT") - >>> range2 = tzrange("EST", -18000, "EDT", -14400, - ... relativedelta(hours=+2, month=4, day=1, - ... weekday=SU(+1)), - ... relativedelta(hours=+1, month=10, day=31, - ... weekday=SU(-1))) - >>> tzstr('EST5EDT') == range1 == range2 - True - - """ - def __init__(self, stdabbr, stdoffset=None, - dstabbr=None, dstoffset=None, - start=None, end=None): - - global relativedelta - from dateutil import relativedelta - - self._std_abbr = stdabbr - self._dst_abbr = dstabbr - - try: - stdoffset = stdoffset.total_seconds() - except (TypeError, AttributeError): - pass - - try: - dstoffset = dstoffset.total_seconds() - except (TypeError, AttributeError): - pass - - if stdoffset is not None: - self._std_offset = datetime.timedelta(seconds=stdoffset) - else: - self._std_offset = ZERO - - if dstoffset is not None: - self._dst_offset = datetime.timedelta(seconds=dstoffset) - elif dstabbr and stdoffset is not None: - self._dst_offset = self._std_offset + datetime.timedelta(hours=+1) - else: - self._dst_offset = ZERO - - if dstabbr and start is None: - self._start_delta = relativedelta.relativedelta( - hours=+2, month=4, day=1, weekday=relativedelta.SU(+1)) - else: - self._start_delta = start - - if dstabbr and end is None: - self._end_delta = relativedelta.relativedelta( - hours=+1, month=10, day=31, weekday=relativedelta.SU(-1)) - else: - self._end_delta = end - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = bool(self._start_delta) - - def transitions(self, year): - """ - For a given year, get the DST on and off transition times, expressed - always on the standard time side. For zones with no transitions, this - function returns ``None``. - - :param year: - The year whose transitions you would like to query. - - :return: - Returns a :class:`tuple` of :class:`datetime.datetime` objects, - ``(dston, dstoff)`` for zones with an annual DST transition, or - ``None`` for fixed offset zones. - """ - if not self.hasdst: - return None - - base_year = datetime.datetime(year, 1, 1) - - start = base_year + self._start_delta - end = base_year + self._end_delta - - return (start, end) - - def __eq__(self, other): - if not isinstance(other, tzrange): - return NotImplemented - - return (self._std_abbr == other._std_abbr and - self._dst_abbr == other._dst_abbr and - self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset and - self._start_delta == other._start_delta and - self._end_delta == other._end_delta) - - @property - def _dst_base_offset(self): - return self._dst_base_offset_ - - -@six.add_metaclass(_TzStrFactory) -class tzstr(tzrange): - """ - ``tzstr`` objects are time zone objects specified by a time-zone string as - it would be passed to a ``TZ`` variable on POSIX-style systems (see - the `GNU C Library: TZ Variable`_ for more details). - - There is one notable exception, which is that POSIX-style time zones use an - inverted offset format, so normally ``GMT+3`` would be parsed as an offset - 3 hours *behind* GMT. The ``tzstr`` time zone object will parse this as an - offset 3 hours *ahead* of GMT. If you would like to maintain the POSIX - behavior, pass a ``True`` value to ``posix_offset``. - - The :class:`tzrange` object provides the same functionality, but is - specified using :class:`relativedelta.relativedelta` objects. rather than - strings. - - :param s: - A time zone string in ``TZ`` variable format. This can be a - :class:`bytes` (2.x: :class:`str`), :class:`str` (2.x: - :class:`unicode`) or a stream emitting unicode characters - (e.g. :class:`StringIO`). - - :param posix_offset: - Optional. If set to ``True``, interpret strings such as ``GMT+3`` or - ``UTC+3`` as being 3 hours *behind* UTC rather than ahead, per the - POSIX standard. - - .. caution:: - - Prior to version 2.7.0, this function also supported time zones - in the format: - - * ``EST5EDT,4,0,6,7200,10,0,26,7200,3600`` - * ``EST5EDT,4,1,0,7200,10,-1,0,7200,3600`` - - This format is non-standard and has been deprecated; this function - will raise a :class:`DeprecatedTZFormatWarning` until - support is removed in a future version. - - .. _`GNU C Library: TZ Variable`: - https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html - """ - def __init__(self, s, posix_offset=False): - global parser - from dateutil.parser import _parser as parser - - self._s = s - - res = parser._parsetz(s) - if res is None or res.any_unused_tokens: - raise ValueError("unknown string format") - - # Here we break the compatibility with the TZ variable handling. - # GMT-3 actually *means* the timezone -3. - if res.stdabbr in ("GMT", "UTC") and not posix_offset: - res.stdoffset *= -1 - - # We must initialize it first, since _delta() needs - # _std_offset and _dst_offset set. Use False in start/end - # to avoid building it two times. - tzrange.__init__(self, res.stdabbr, res.stdoffset, - res.dstabbr, res.dstoffset, - start=False, end=False) - - if not res.dstabbr: - self._start_delta = None - self._end_delta = None - else: - self._start_delta = self._delta(res.start) - if self._start_delta: - self._end_delta = self._delta(res.end, isend=1) - - self.hasdst = bool(self._start_delta) - - def _delta(self, x, isend=0): - from dateutil import relativedelta - kwargs = {} - if x.month is not None: - kwargs["month"] = x.month - if x.weekday is not None: - kwargs["weekday"] = relativedelta.weekday(x.weekday, x.week) - if x.week > 0: - kwargs["day"] = 1 - else: - kwargs["day"] = 31 - elif x.day: - kwargs["day"] = x.day - elif x.yday is not None: - kwargs["yearday"] = x.yday - elif x.jyday is not None: - kwargs["nlyearday"] = x.jyday - if not kwargs: - # Default is to start on first sunday of april, and end - # on last sunday of october. - if not isend: - kwargs["month"] = 4 - kwargs["day"] = 1 - kwargs["weekday"] = relativedelta.SU(+1) - else: - kwargs["month"] = 10 - kwargs["day"] = 31 - kwargs["weekday"] = relativedelta.SU(-1) - if x.time is not None: - kwargs["seconds"] = x.time - else: - # Default is 2AM. - kwargs["seconds"] = 7200 - if isend: - # Convert to standard time, to follow the documented way - # of working with the extra hour. See the documentation - # of the tzinfo class. - delta = self._dst_offset - self._std_offset - kwargs["seconds"] -= delta.seconds + delta.days * 86400 - return relativedelta.relativedelta(**kwargs) - - def __repr__(self): - return "%s(%s)" % (self.__class__.__name__, repr(self._s)) - - -class _tzicalvtzcomp(object): - def __init__(self, tzoffsetfrom, tzoffsetto, isdst, - tzname=None, rrule=None): - self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom) - self.tzoffsetto = datetime.timedelta(seconds=tzoffsetto) - self.tzoffsetdiff = self.tzoffsetto - self.tzoffsetfrom - self.isdst = isdst - self.tzname = tzname - self.rrule = rrule - - -class _tzicalvtz(_tzinfo): - def __init__(self, tzid, comps=[]): - super(_tzicalvtz, self).__init__() - - self._tzid = tzid - self._comps = comps - self._cachedate = [] - self._cachecomp = [] - self._cache_lock = _thread.allocate_lock() - - def _find_comp(self, dt): - if len(self._comps) == 1: - return self._comps[0] - - dt = dt.replace(tzinfo=None) - - try: - with self._cache_lock: - return self._cachecomp[self._cachedate.index( - (dt, self._fold(dt)))] - except ValueError: - pass - - lastcompdt = None - lastcomp = None - - for comp in self._comps: - compdt = self._find_compdt(comp, dt) - - if compdt and (not lastcompdt or lastcompdt < compdt): - lastcompdt = compdt - lastcomp = comp - - if not lastcomp: - # RFC says nothing about what to do when a given - # time is before the first onset date. We'll look for the - # first standard component, or the first component, if - # none is found. - for comp in self._comps: - if not comp.isdst: - lastcomp = comp - break - else: - lastcomp = comp[0] - - with self._cache_lock: - self._cachedate.insert(0, (dt, self._fold(dt))) - self._cachecomp.insert(0, lastcomp) - - if len(self._cachedate) > 10: - self._cachedate.pop() - self._cachecomp.pop() - - return lastcomp - - def _find_compdt(self, comp, dt): - if comp.tzoffsetdiff < ZERO and self._fold(dt): - dt -= comp.tzoffsetdiff - - compdt = comp.rrule.before(dt, inc=True) - - return compdt - - def utcoffset(self, dt): - if dt is None: - return None - - return self._find_comp(dt).tzoffsetto - - def dst(self, dt): - comp = self._find_comp(dt) - if comp.isdst: - return comp.tzoffsetdiff - else: - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - return self._find_comp(dt).tzname - - def __repr__(self): - return "" % repr(self._tzid) - - __reduce__ = object.__reduce__ - - -class tzical(object): - """ - This object is designed to parse an iCalendar-style ``VTIMEZONE`` structure - as set out in `RFC 5545`_ Section 4.6.5 into one or more `tzinfo` objects. - - :param `fileobj`: - A file or stream in iCalendar format, which should be UTF-8 encoded - with CRLF endings. - - .. _`RFC 5545`: https://tools.ietf.org/html/rfc5545 - """ - def __init__(self, fileobj): - global rrule - from dateutil import rrule - - if isinstance(fileobj, string_types): - self._s = fileobj - # ical should be encoded in UTF-8 with CRLF - fileobj = open(fileobj, 'r') - else: - self._s = getattr(fileobj, 'name', repr(fileobj)) - fileobj = _nullcontext(fileobj) - - self._vtz = {} - - with fileobj as fobj: - self._parse_rfc(fobj.read()) - - def keys(self): - """ - Retrieves the available time zones as a list. - """ - return list(self._vtz.keys()) - - def get(self, tzid=None): - """ - Retrieve a :py:class:`datetime.tzinfo` object by its ``tzid``. - - :param tzid: - If there is exactly one time zone available, omitting ``tzid`` - or passing :py:const:`None` value returns it. Otherwise a valid - key (which can be retrieved from :func:`keys`) is required. - - :raises ValueError: - Raised if ``tzid`` is not specified but there are either more - or fewer than 1 zone defined. - - :returns: - Returns either a :py:class:`datetime.tzinfo` object representing - the relevant time zone or :py:const:`None` if the ``tzid`` was - not found. - """ - if tzid is None: - if len(self._vtz) == 0: - raise ValueError("no timezones defined") - elif len(self._vtz) > 1: - raise ValueError("more than one timezone available") - tzid = next(iter(self._vtz)) - - return self._vtz.get(tzid) - - def _parse_offset(self, s): - s = s.strip() - if not s: - raise ValueError("empty offset") - if s[0] in ('+', '-'): - signal = (-1, +1)[s[0] == '+'] - s = s[1:] - else: - signal = +1 - if len(s) == 4: - return (int(s[:2]) * 3600 + int(s[2:]) * 60) * signal - elif len(s) == 6: - return (int(s[:2]) * 3600 + int(s[2:4]) * 60 + int(s[4:])) * signal - else: - raise ValueError("invalid offset: " + s) - - def _parse_rfc(self, s): - lines = s.splitlines() - if not lines: - raise ValueError("empty string") - - # Unfold - i = 0 - while i < len(lines): - line = lines[i].rstrip() - if not line: - del lines[i] - elif i > 0 and line[0] == " ": - lines[i-1] += line[1:] - del lines[i] - else: - i += 1 - - tzid = None - comps = [] - invtz = False - comptype = None - for line in lines: - if not line: - continue - name, value = line.split(':', 1) - parms = name.split(';') - if not parms: - raise ValueError("empty property name") - name = parms[0].upper() - parms = parms[1:] - if invtz: - if name == "BEGIN": - if value in ("STANDARD", "DAYLIGHT"): - # Process component - pass - else: - raise ValueError("unknown component: "+value) - comptype = value - founddtstart = False - tzoffsetfrom = None - tzoffsetto = None - rrulelines = [] - tzname = None - elif name == "END": - if value == "VTIMEZONE": - if comptype: - raise ValueError("component not closed: "+comptype) - if not tzid: - raise ValueError("mandatory TZID not found") - if not comps: - raise ValueError( - "at least one component is needed") - # Process vtimezone - self._vtz[tzid] = _tzicalvtz(tzid, comps) - invtz = False - elif value == comptype: - if not founddtstart: - raise ValueError("mandatory DTSTART not found") - if tzoffsetfrom is None: - raise ValueError( - "mandatory TZOFFSETFROM not found") - if tzoffsetto is None: - raise ValueError( - "mandatory TZOFFSETFROM not found") - # Process component - rr = None - if rrulelines: - rr = rrule.rrulestr("\n".join(rrulelines), - compatible=True, - ignoretz=True, - cache=True) - comp = _tzicalvtzcomp(tzoffsetfrom, tzoffsetto, - (comptype == "DAYLIGHT"), - tzname, rr) - comps.append(comp) - comptype = None - else: - raise ValueError("invalid component end: "+value) - elif comptype: - if name == "DTSTART": - # DTSTART in VTIMEZONE takes a subset of valid RRULE - # values under RFC 5545. - for parm in parms: - if parm != 'VALUE=DATE-TIME': - msg = ('Unsupported DTSTART param in ' + - 'VTIMEZONE: ' + parm) - raise ValueError(msg) - rrulelines.append(line) - founddtstart = True - elif name in ("RRULE", "RDATE", "EXRULE", "EXDATE"): - rrulelines.append(line) - elif name == "TZOFFSETFROM": - if parms: - raise ValueError( - "unsupported %s parm: %s " % (name, parms[0])) - tzoffsetfrom = self._parse_offset(value) - elif name == "TZOFFSETTO": - if parms: - raise ValueError( - "unsupported TZOFFSETTO parm: "+parms[0]) - tzoffsetto = self._parse_offset(value) - elif name == "TZNAME": - if parms: - raise ValueError( - "unsupported TZNAME parm: "+parms[0]) - tzname = value - elif name == "COMMENT": - pass - else: - raise ValueError("unsupported property: "+name) - else: - if name == "TZID": - if parms: - raise ValueError( - "unsupported TZID parm: "+parms[0]) - tzid = value - elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"): - pass - else: - raise ValueError("unsupported property: "+name) - elif name == "BEGIN" and value == "VTIMEZONE": - tzid = None - comps = [] - invtz = True - - def __repr__(self): - return "%s(%s)" % (self.__class__.__name__, repr(self._s)) - - -if sys.platform != "win32": - TZFILES = ["/etc/localtime", "localtime"] - TZPATHS = ["/usr/share/zoneinfo", - "/usr/lib/zoneinfo", - "/usr/share/lib/zoneinfo", - "/etc/zoneinfo"] -else: - TZFILES = [] - TZPATHS = [] - - -def __get_gettz(): - tzlocal_classes = (tzlocal,) - if tzwinlocal is not None: - tzlocal_classes += (tzwinlocal,) - - class GettzFunc(object): - """ - Retrieve a time zone object from a string representation - - This function is intended to retrieve the :py:class:`tzinfo` subclass - that best represents the time zone that would be used if a POSIX - `TZ variable`_ were set to the same value. - - If no argument or an empty string is passed to ``gettz``, local time - is returned: - - .. code-block:: python3 - - >>> gettz() - tzfile('/etc/localtime') - - This function is also the preferred way to map IANA tz database keys - to :class:`tzfile` objects: - - .. code-block:: python3 - - >>> gettz('Pacific/Kiritimati') - tzfile('/usr/share/zoneinfo/Pacific/Kiritimati') - - On Windows, the standard is extended to include the Windows-specific - zone names provided by the operating system: - - .. code-block:: python3 - - >>> gettz('Egypt Standard Time') - tzwin('Egypt Standard Time') - - Passing a GNU ``TZ`` style string time zone specification returns a - :class:`tzstr` object: - - .. code-block:: python3 - - >>> gettz('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3') - tzstr('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3') - - :param name: - A time zone name (IANA, or, on Windows, Windows keys), location of - a ``tzfile(5)`` zoneinfo file or ``TZ`` variable style time zone - specifier. An empty string, no argument or ``None`` is interpreted - as local time. - - :return: - Returns an instance of one of ``dateutil``'s :py:class:`tzinfo` - subclasses. - - .. versionchanged:: 2.7.0 - - After version 2.7.0, any two calls to ``gettz`` using the same - input strings will return the same object: - - .. code-block:: python3 - - >>> tz.gettz('America/Chicago') is tz.gettz('America/Chicago') - True - - In addition to improving performance, this ensures that - `"same zone" semantics`_ are used for datetimes in the same zone. - - - .. _`TZ variable`: - https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html - - .. _`"same zone" semantics`: - https://blog.ganssle.io/articles/2018/02/aware-datetime-arithmetic.html - """ - def __init__(self): - - self.__instances = weakref.WeakValueDictionary() - self.__strong_cache_size = 8 - self.__strong_cache = OrderedDict() - self._cache_lock = _thread.allocate_lock() - - def __call__(self, name=None): - with self._cache_lock: - rv = self.__instances.get(name, None) - - if rv is None: - rv = self.nocache(name=name) - if not (name is None - or isinstance(rv, tzlocal_classes) - or rv is None): - # tzlocal is slightly more complicated than the other - # time zone providers because it depends on environment - # at construction time, so don't cache that. - # - # We also cannot store weak references to None, so we - # will also not store that. - self.__instances[name] = rv - else: - # No need for strong caching, return immediately - return rv - - self.__strong_cache[name] = self.__strong_cache.pop(name, rv) - - if len(self.__strong_cache) > self.__strong_cache_size: - self.__strong_cache.popitem(last=False) - - return rv - - def set_cache_size(self, size): - with self._cache_lock: - self.__strong_cache_size = size - while len(self.__strong_cache) > size: - self.__strong_cache.popitem(last=False) - - def cache_clear(self): - with self._cache_lock: - self.__instances = weakref.WeakValueDictionary() - self.__strong_cache.clear() - - @staticmethod - def nocache(name=None): - """A non-cached version of gettz""" - tz = None - if not name: - try: - name = os.environ["TZ"] - except KeyError: - pass - if name is None or name in ("", ":"): - for filepath in TZFILES: - if not os.path.isabs(filepath): - filename = filepath - for path in TZPATHS: - filepath = os.path.join(path, filename) - if os.path.isfile(filepath): - break - else: - continue - if os.path.isfile(filepath): - try: - tz = tzfile(filepath) - break - except (IOError, OSError, ValueError): - pass - else: - tz = tzlocal() - else: - try: - if name.startswith(":"): - name = name[1:] - except TypeError as e: - if isinstance(name, bytes): - new_msg = "gettz argument should be str, not bytes" - six.raise_from(TypeError(new_msg), e) - else: - raise - if os.path.isabs(name): - if os.path.isfile(name): - tz = tzfile(name) - else: - tz = None - else: - for path in TZPATHS: - filepath = os.path.join(path, name) - if not os.path.isfile(filepath): - filepath = filepath.replace(' ', '_') - if not os.path.isfile(filepath): - continue - try: - tz = tzfile(filepath) - break - except (IOError, OSError, ValueError): - pass - else: - tz = None - if tzwin is not None: - try: - tz = tzwin(name) - except (WindowsError, UnicodeEncodeError): - # UnicodeEncodeError is for Python 2.7 compat - tz = None - - if not tz: - from dateutil.zoneinfo import get_zonefile_instance - tz = get_zonefile_instance().get(name) - - if not tz: - for c in name: - # name is not a tzstr unless it has at least - # one offset. For short values of "name", an - # explicit for loop seems to be the fastest way - # To determine if a string contains a digit - if c in "0123456789": - try: - tz = tzstr(name) - except ValueError: - pass - break - else: - if name in ("GMT", "UTC"): - tz = UTC - elif name in time.tzname: - tz = tzlocal() - return tz - - return GettzFunc() - - -gettz = __get_gettz() -del __get_gettz - - -def datetime_exists(dt, tz=None): - """ - Given a datetime and a time zone, determine whether or not a given datetime - would fall in a gap. - - :param dt: - A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` - is provided.) - - :param tz: - A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If - ``None`` or not provided, the datetime's own time zone will be used. - - :return: - Returns a boolean value whether or not the "wall time" exists in - ``tz``. - - .. versionadded:: 2.7.0 - """ - if tz is None: - if dt.tzinfo is None: - raise ValueError('Datetime is naive and no time zone provided.') - tz = dt.tzinfo - - dt = dt.replace(tzinfo=None) - - # This is essentially a test of whether or not the datetime can survive - # a round trip to UTC. - dt_rt = dt.replace(tzinfo=tz).astimezone(UTC).astimezone(tz) - dt_rt = dt_rt.replace(tzinfo=None) - - return dt == dt_rt - - -def datetime_ambiguous(dt, tz=None): - """ - Given a datetime and a time zone, determine whether or not a given datetime - is ambiguous (i.e if there are two times differentiated only by their DST - status). - - :param dt: - A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` - is provided.) - - :param tz: - A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If - ``None`` or not provided, the datetime's own time zone will be used. - - :return: - Returns a boolean value whether or not the "wall time" is ambiguous in - ``tz``. - - .. versionadded:: 2.6.0 - """ - if tz is None: - if dt.tzinfo is None: - raise ValueError('Datetime is naive and no time zone provided.') - - tz = dt.tzinfo - - # If a time zone defines its own "is_ambiguous" function, we'll use that. - is_ambiguous_fn = getattr(tz, 'is_ambiguous', None) - if is_ambiguous_fn is not None: - try: - return tz.is_ambiguous(dt) - except Exception: - pass - - # If it doesn't come out and tell us it's ambiguous, we'll just check if - # the fold attribute has any effect on this particular date and time. - dt = dt.replace(tzinfo=tz) - wall_0 = enfold(dt, fold=0) - wall_1 = enfold(dt, fold=1) - - same_offset = wall_0.utcoffset() == wall_1.utcoffset() - same_dst = wall_0.dst() == wall_1.dst() - - return not (same_offset and same_dst) - - -def resolve_imaginary(dt): - """ - Given a datetime that may be imaginary, return an existing datetime. - - This function assumes that an imaginary datetime represents what the - wall time would be in a zone had the offset transition not occurred, so - it will always fall forward by the transition's change in offset. - - .. doctest:: - - >>> from dateutil import tz - >>> from datetime import datetime - >>> NYC = tz.gettz('America/New_York') - >>> print(tz.resolve_imaginary(datetime(2017, 3, 12, 2, 30, tzinfo=NYC))) - 2017-03-12 03:30:00-04:00 - - >>> KIR = tz.gettz('Pacific/Kiritimati') - >>> print(tz.resolve_imaginary(datetime(1995, 1, 1, 12, 30, tzinfo=KIR))) - 1995-01-02 12:30:00+14:00 - - As a note, :func:`datetime.astimezone` is guaranteed to produce a valid, - existing datetime, so a round-trip to and from UTC is sufficient to get - an extant datetime, however, this generally "falls back" to an earlier time - rather than falling forward to the STD side (though no guarantees are made - about this behavior). - - :param dt: - A :class:`datetime.datetime` which may or may not exist. - - :return: - Returns an existing :class:`datetime.datetime`. If ``dt`` was not - imaginary, the datetime returned is guaranteed to be the same object - passed to the function. - - .. versionadded:: 2.7.0 - """ - if dt.tzinfo is not None and not datetime_exists(dt): - - curr_offset = (dt + datetime.timedelta(hours=24)).utcoffset() - old_offset = (dt - datetime.timedelta(hours=24)).utcoffset() - - dt += curr_offset - old_offset - - return dt - - -def _datetime_to_timestamp(dt): - """ - Convert a :class:`datetime.datetime` object to an epoch timestamp in - seconds since January 1, 1970, ignoring the time zone. - """ - return (dt.replace(tzinfo=None) - EPOCH).total_seconds() - - -if sys.version_info >= (3, 6): - def _get_supported_offset(second_offset): - return second_offset -else: - def _get_supported_offset(second_offset): - # For python pre-3.6, round to full-minutes if that's not the case. - # Python's datetime doesn't accept sub-minute timezones. Check - # http://python.org/sf/1447945 or https://bugs.python.org/issue5288 - # for some information. - old_offset = second_offset - calculated_offset = 60 * ((second_offset + 30) // 60) - return calculated_offset - - -try: - # Python 3.7 feature - from contextlib import nullcontext as _nullcontext -except ImportError: - class _nullcontext(object): - """ - Class for wrapping contexts so that they are passed through in a - with statement. - """ - def __init__(self, context): - self.context = context - - def __enter__(self): - return self.context - - def __exit__(*args, **kwargs): - pass - -# vim:ts=4:sw=4:et diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac_dwt_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac_dwt_template.c deleted file mode 100644 index 0d39754ed83678494bc1258dea6e2b0a607dd555..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac_dwt_template.c +++ /dev/null @@ -1,608 +0,0 @@ -/* - * Copyright (C) 2004-2010 Michael Niedermayer - * Copyright (C) 2008 David Conrad - * Copyright (C) 2015 Open Broadcast Systems Ltd. - * Author (C) 2015 Rostislav Pehlivanov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#if defined(TEMPLATE_8bit) - -# define RENAME(N) N ## _8bit -# define TYPE int16_t -# undef TEMPLATE_8bit - -#elif defined(TEMPLATE_10bit) - -# define RENAME(N) N ## _10bit -# define TYPE int32_t -# undef TEMPLATE_10bit - -#elif defined(TEMPLATE_12bit) - -# define RENAME(N) N ## _12bit -# define TYPE int32_t -# undef TEMPLATE_12bit - -#endif - -static void RENAME(vertical_compose53iL0)(uint8_t *_b0, uint8_t *_b1, uint8_t *_b2, - int width) -{ - int i; - TYPE *b0 = (TYPE *)_b0; - TYPE *b1 = (TYPE *)_b1; - TYPE *b2 = (TYPE *)_b2; - for (i = 0; i < width; i++) - b1[i] -= (unsigned)((int)(b0[i] + (unsigned)b2[i] + 2) >> 2); -} - -static av_always_inline void RENAME(interleave)(TYPE *dst, TYPE *src0, TYPE *src1, int w2, - int add, int shift) -{ - int i; - for (i = 0; i < w2; i++) { - dst[2*i ] = ((int)(src0[i] + (unsigned)add)) >> shift; - dst[2*i+1] = ((int)(src1[i] + (unsigned)add)) >> shift; - } -} - -static void RENAME(horizontal_compose_dirac53i)(uint8_t *_b, uint8_t *_temp, int w) -{ - int x; - const int w2 = w >> 1; - TYPE *b = (TYPE *)_b; - TYPE *temp = (TYPE *)_temp; - - temp[0] = COMPOSE_53iL0(b[w2], b[0], b[w2]); - for (x = 1; x < w2; x++) { - temp[x ] = COMPOSE_53iL0 (b[x+w2-1], b[x ], b[x+w2]); - temp[x+w2-1] = COMPOSE_DIRAC53iH0(temp[x-1], b[x+w2-1], temp[x]); - } - temp[w-1] = COMPOSE_DIRAC53iH0(temp[w2-1], b[w-1], temp[w2-1]); - - RENAME(interleave)(b, temp, temp+w2, w2, 1, 1); -} - -static void RENAME(horizontal_compose_dd97i)(uint8_t *_b, uint8_t *_tmp, int w) -{ - int x; - const int w2 = w >> 1; - TYPE *b = (TYPE *)_b; - TYPE *tmp = (TYPE *)_tmp; - - tmp[0] = COMPOSE_53iL0(b[w2], b[0], b[w2]); - for (x = 1; x < w2; x++) - tmp[x] = COMPOSE_53iL0(b[x+w2-1], b[x], b[x+w2]); - - // extend the edges - tmp[-1] = tmp[0]; - tmp[w2+1] = tmp[w2] = tmp[w2-1]; - - for (x = 0; x < w2; x++) { - b[2*x ] = ((int)(tmp[x] + 1U))>>1; - b[2*x+1] = ((int)(COMPOSE_DD97iH0(tmp[x-1], tmp[x], b[x+w2], tmp[x+1], tmp[x+2]) + 1U))>>1; - } -} - -static void RENAME(horizontal_compose_dd137i)(uint8_t *_b, uint8_t *_tmp, int w) -{ - const int w2 = w >> 1; - int x; - TYPE *b = (TYPE *)_b; - TYPE *tmp = (TYPE *)_tmp; - - tmp[0] = COMPOSE_DD137iL0(b[w2], b[w2], b[0], b[w2 ], b[w2+1]); - tmp[1] = COMPOSE_DD137iL0(b[w2], b[w2], b[1], b[w2+1], b[w2+2]); - for (x = 2; x < w2-1; x++) - tmp[x] = COMPOSE_DD137iL0(b[x+w2-2], b[x+w2-1], b[x], b[x+w2], b[x+w2+1]); - tmp[w2-1] = COMPOSE_DD137iL0(b[w-3], b[w-2], b[w2-1], b[w-1], b[w-1]); - - // extend the edges - tmp[-1] = tmp[0]; - tmp[w2+1] = tmp[w2] = tmp[w2-1]; - - for (x = 0; x < w2; x++) { - b[2*x ] = ((int)(tmp[x] + 1U))>>1; - b[2*x+1] = ((int)(COMPOSE_DD97iH0(tmp[x-1], tmp[x], b[x+w2], tmp[x+1], tmp[x+2]) + 1U))>>1; - } -} - -static av_always_inline void RENAME(horizontal_compose_haari)(TYPE *b, TYPE *temp, - int w, int shift) -{ - const int w2 = w >> 1; - int x; - - for (x = 0; x < w2; x++) { - temp[x ] = COMPOSE_HAARiL0(b[x ], b[x+w2]); - temp[x+w2] = COMPOSE_HAARiH0(b[x+w2], temp[x]); - } - - RENAME(interleave)(b, temp, temp+w2, w2, shift, shift); -} - -static void RENAME(horizontal_compose_haar0i)(uint8_t *_b, uint8_t *_temp, int w) -{ - TYPE *b = (TYPE *)_b; - TYPE *temp = (TYPE *)_temp; - RENAME(horizontal_compose_haari)(b, temp, w, 0); -} - -static void RENAME(horizontal_compose_haar1i)(uint8_t *_b, uint8_t *_temp, int w) -{ - TYPE *b = (TYPE *)_b; - TYPE *temp = (TYPE *)_temp; - RENAME(horizontal_compose_haari)(b, temp, w, 1); -} - -static void RENAME(horizontal_compose_fidelityi)(uint8_t *_b, uint8_t *_tmp, int w) -{ - const int w2 = w >> 1; - int i, x; - TYPE v[8]; - TYPE *b = (TYPE *)_b; - TYPE *tmp = (TYPE *)_tmp; - - for (x = 0; x < w2; x++) { - for (i = 0; i < 8; i++) - v[i] = b[av_clip(x-3+i, 0, w2-1)]; - tmp[x] = COMPOSE_FIDELITYiH0(v[0], v[1], v[2], v[3], b[x+w2], v[4], v[5], v[6], v[7]); - } - - for (x = 0; x < w2; x++) { - for (i = 0; i < 8; i++) - v[i] = tmp[av_clip(x-4+i, 0, w2-1)]; - tmp[x+w2] = COMPOSE_FIDELITYiL0(v[0], v[1], v[2], v[3], b[x], v[4], v[5], v[6], v[7]); - } - - RENAME(interleave)(b, tmp+w2, tmp, w2, 0, 0); -} - -static void RENAME(horizontal_compose_daub97i)(uint8_t *_b, uint8_t *_temp, int w) -{ - const int w2 = w >> 1; - int x, b0, b1, b2; - TYPE *b = (TYPE *)_b; - TYPE *temp = (TYPE *)_temp; - - temp[0] = COMPOSE_DAUB97iL1(b[w2], b[0], b[w2]); - for (x = 1; x < w2; x++) { - temp[x ] = COMPOSE_DAUB97iL1(b[x+w2-1], b[x ], b[x+w2]); - temp[x+w2-1] = COMPOSE_DAUB97iH1(temp[x-1], b[x+w2-1], temp[x]); - } - temp[w-1] = COMPOSE_DAUB97iH1(temp[w2-1], b[w-1], temp[w2-1]); - - // second stage combined with interleave and shift - b0 = b2 = COMPOSE_DAUB97iL0(temp[w2], temp[0], temp[w2]); - b[0] = ~((~b0) >> 1); - for (x = 1; x < w2; x++) { - b2 = COMPOSE_DAUB97iL0(temp[x+w2-1], temp[x ], temp[x+w2]); - b1 = COMPOSE_DAUB97iH0( b0, temp[x+w2-1], b2 ); - b[2*x-1] = ~((~b1) >> 1); - b[2*x ] = ~((~b2) >> 1); - b0 = b2; - } - b[w-1] = ~((~COMPOSE_DAUB97iH0(b2, temp[w-1], b2)) >> 1); -} - -static void RENAME(vertical_compose_dirac53iH0)(uint8_t *_b0, uint8_t *_b1, uint8_t *_b2, - int width) -{ - int i; - TYPE *b0 = (TYPE *)_b0; - TYPE *b1 = (TYPE *)_b1; - TYPE *b2 = (TYPE *)_b2; - for(i=0; ivertical_compose_l0.tap3; - vertical_compose_5tap vertical_compose_h0 = d->vertical_compose_h0.tap5; - DWTCompose *cs = d->cs + level; - - int i, y = cs->y; - uint8_t *b[8]; - for (i = 0; i < 6; i++) - b[i] = cs->b[i]; - b[6] = d->buffer + av_clip(y+5, 0, height-2)*stride; - b[7] = d->buffer + av_clip(y+6, 1, height-1)*stride; - - if(y+5<(unsigned)height) vertical_compose_l0( b[5], b[6], b[7], width); - if(y+1<(unsigned)height) vertical_compose_h0(b[0], b[2], b[3], b[4], b[6], width); - - if(y-1<(unsigned)height) d->horizontal_compose(b[0], d->temp, width); - if(y+0<(unsigned)height) d->horizontal_compose(b[1], d->temp, width); - - for (i = 0; i < 6; i++) - cs->b[i] = b[i+2]; - cs->y += 2; -} - -static void RENAME(spatial_compose_dirac53i_dy)(DWTContext *d, int level, int width, int height, int stride) -{ - vertical_compose_3tap vertical_compose_l0 = d->vertical_compose_l0.tap3; - vertical_compose_3tap vertical_compose_h0 = d->vertical_compose_h0.tap3; - DWTCompose *cs = d->cs + level; - - int y= cs->y; - uint8_t *b[4] = { cs->b[0], cs->b[1] }; - b[2] = d->buffer + avpriv_mirror(y+1, height-1)*stride; - b[3] = d->buffer + avpriv_mirror(y+2, height-1)*stride; - - if(y+1<(unsigned)height) vertical_compose_l0(b[1], b[2], b[3], width); - if(y+0<(unsigned)height) vertical_compose_h0(b[0], b[1], b[2], width); - - if(y-1<(unsigned)height) d->horizontal_compose(b[0], d->temp, width); - if(y+0<(unsigned)height) d->horizontal_compose(b[1], d->temp, width); - - cs->b[0] = b[2]; - cs->b[1] = b[3]; - cs->y += 2; -} - -static void RENAME(spatial_compose_dd137i_dy)(DWTContext *d, int level, int width, int height, int stride) -{ - vertical_compose_5tap vertical_compose_l0 = d->vertical_compose_l0.tap5; - vertical_compose_5tap vertical_compose_h0 = d->vertical_compose_h0.tap5; - DWTCompose *cs = d->cs + level; - - int i, y = cs->y; - uint8_t *b[10]; - for (i = 0; i < 8; i++) - b[i] = cs->b[i]; - b[8] = d->buffer + av_clip(y+7, 0, height-2)*stride; - b[9] = d->buffer + av_clip(y+8, 1, height-1)*stride; - - if(y+5<(unsigned)height) vertical_compose_l0(b[3], b[5], b[6], b[7], b[9], width); - if(y+1<(unsigned)height) vertical_compose_h0(b[0], b[2], b[3], b[4], b[6], width); - - if(y-1<(unsigned)height) d->horizontal_compose(b[0], d->temp, width); - if(y+0<(unsigned)height) d->horizontal_compose(b[1], d->temp, width); - - for (i = 0; i < 8; i++) - cs->b[i] = b[i+2]; - cs->y += 2; -} - -// haar makes the assumption that height is even (always true for dirac) -static void RENAME(spatial_compose_haari_dy)(DWTContext *d, int level, int width, int height, int stride) -{ - vertical_compose_2tap vertical_compose = d->vertical_compose; - int y = d->cs[level].y; - uint8_t *b0 = d->buffer + (y-1)*stride; - uint8_t *b1 = d->buffer + (y )*stride; - - vertical_compose(b0, b1, width); - d->horizontal_compose(b0, d->temp, width); - d->horizontal_compose(b1, d->temp, width); - - d->cs[level].y += 2; -} - -// Don't do sliced idwt for fidelity; the 9 tap filter makes it a bit annoying -// Fortunately, this filter isn't used in practice. -static void RENAME(spatial_compose_fidelity)(DWTContext *d, int level, int width, int height, int stride) -{ - vertical_compose_9tap vertical_compose_l0 = d->vertical_compose_l0.tap9; - vertical_compose_9tap vertical_compose_h0 = d->vertical_compose_h0.tap9; - int i, y; - uint8_t *b[8]; - - for (y = 1; y < height; y += 2) { - for (i = 0; i < 8; i++) - b[i] = d->buffer + av_clip((y-7 + 2*i), 0, height-2)*stride; - vertical_compose_h0(d->buffer + y*stride, b, width); - } - - for (y = 0; y < height; y += 2) { - for (i = 0; i < 8; i++) - b[i] = d->buffer + av_clip((y-7 + 2*i), 1, height-1)*stride; - vertical_compose_l0(d->buffer + y*stride, b, width); - } - - for (y = 0; y < height; y++) - d->horizontal_compose(d->buffer + y*stride, d->temp, width); - - d->cs[level].y = height+1; -} - -static void RENAME(spatial_compose_daub97i_dy)(DWTContext *d, int level, int width, int height, int stride) -{ - vertical_compose_3tap vertical_compose_l0 = d->vertical_compose_l0.tap3; - vertical_compose_3tap vertical_compose_h0 = d->vertical_compose_h0.tap3; - vertical_compose_3tap vertical_compose_l1 = d->vertical_compose_l1; - vertical_compose_3tap vertical_compose_h1 = d->vertical_compose_h1; - DWTCompose *cs = d->cs + level; - - int i, y = cs->y; - uint8_t *b[6]; - for (i = 0; i < 4; i++) - b[i] = cs->b[i]; - b[4] = d->buffer + avpriv_mirror(y+3, height-1)*stride; - b[5] = d->buffer + avpriv_mirror(y+4, height-1)*stride; - - if(y+3<(unsigned)height) vertical_compose_l1(b[3], b[4], b[5], width); - if(y+2<(unsigned)height) vertical_compose_h1(b[2], b[3], b[4], width); - if(y+1<(unsigned)height) vertical_compose_l0(b[1], b[2], b[3], width); - if(y+0<(unsigned)height) vertical_compose_h0(b[0], b[1], b[2], width); - - if(y-1<(unsigned)height) d->horizontal_compose(b[0], d->temp, width); - if(y+0<(unsigned)height) d->horizontal_compose(b[1], d->temp, width); - - for (i = 0; i < 4; i++) - cs->b[i] = b[i+2]; - cs->y += 2; -} - -static void RENAME(spatial_compose97i_init)(DWTCompose *cs, uint8_t *buffer, int height, int stride) -{ - cs->b[0] = buffer + avpriv_mirror(-3-1, height-1)*stride; - cs->b[1] = buffer + avpriv_mirror(-3 , height-1)*stride; - cs->b[2] = buffer + avpriv_mirror(-3+1, height-1)*stride; - cs->b[3] = buffer + avpriv_mirror(-3+2, height-1)*stride; - cs->y = -3; -} - -static void RENAME(spatial_compose53i_init)(DWTCompose *cs, uint8_t *buffer, int height, int stride) -{ - cs->b[0] = buffer + avpriv_mirror(-1-1, height-1)*stride; - cs->b[1] = buffer + avpriv_mirror(-1 , height-1)*stride; - cs->y = -1; -} - -static void RENAME(spatial_compose_dd97i_init)(DWTCompose *cs, uint8_t *buffer, int height, int stride) -{ - cs->b[0] = buffer + av_clip(-5-1, 0, height-2)*stride; - cs->b[1] = buffer + av_clip(-5 , 1, height-1)*stride; - cs->b[2] = buffer + av_clip(-5+1, 0, height-2)*stride; - cs->b[3] = buffer + av_clip(-5+2, 1, height-1)*stride; - cs->b[4] = buffer + av_clip(-5+3, 0, height-2)*stride; - cs->b[5] = buffer + av_clip(-5+4, 1, height-1)*stride; - cs->y = -5; -} - -static void RENAME(spatial_compose_dd137i_init)(DWTCompose *cs, uint8_t *buffer, int height, int stride) -{ - cs->b[0] = buffer + av_clip(-5-1, 0, height-2)*stride; - cs->b[1] = buffer + av_clip(-5 , 1, height-1)*stride; - cs->b[2] = buffer + av_clip(-5+1, 0, height-2)*stride; - cs->b[3] = buffer + av_clip(-5+2, 1, height-1)*stride; - cs->b[4] = buffer + av_clip(-5+3, 0, height-2)*stride; - cs->b[5] = buffer + av_clip(-5+4, 1, height-1)*stride; - cs->b[6] = buffer + av_clip(-5+5, 0, height-2)*stride; - cs->b[7] = buffer + av_clip(-5+6, 1, height-1)*stride; - cs->y = -5; -} - -static int RENAME(spatial_idwt_init)(DWTContext *d, enum dwt_type type) -{ - int level; - - d->temp = (uint8_t *)(((TYPE *)d->temp) + 8); - - for (level = d->decomposition_count - 1; level >= 0; level--){ - int hl = d->height >> level; - int stride_l = d->stride << level; - - switch(type){ - case DWT_DIRAC_DD9_7: - RENAME(spatial_compose_dd97i_init)(d->cs+level, d->buffer, hl, stride_l); - break; - case DWT_DIRAC_LEGALL5_3: - RENAME(spatial_compose53i_init)(d->cs+level, d->buffer, hl, stride_l); - break; - case DWT_DIRAC_DD13_7: - RENAME(spatial_compose_dd137i_init)(d->cs+level, d->buffer, hl, stride_l); - break; - case DWT_DIRAC_HAAR0: - case DWT_DIRAC_HAAR1: - d->cs[level].y = 1; - break; - case DWT_DIRAC_DAUB9_7: - RENAME(spatial_compose97i_init)(d->cs+level, d->buffer, hl, stride_l); - break; - default: - d->cs[level].y = 0; - break; - } - } - - switch (type) { - case DWT_DIRAC_DD9_7: - d->spatial_compose = RENAME(spatial_compose_dd97i_dy); - d->vertical_compose_l0.tap3 = RENAME(vertical_compose53iL0); - d->vertical_compose_h0.tap5 = RENAME(vertical_compose_dd97iH0); - d->horizontal_compose = RENAME(horizontal_compose_dd97i); - d->support = 7; - break; - case DWT_DIRAC_LEGALL5_3: - d->spatial_compose = RENAME(spatial_compose_dirac53i_dy); - d->vertical_compose_l0.tap3 = RENAME(vertical_compose53iL0); - d->vertical_compose_h0.tap3 = RENAME(vertical_compose_dirac53iH0); - d->horizontal_compose = RENAME(horizontal_compose_dirac53i); - d->support = 3; - break; - case DWT_DIRAC_DD13_7: - d->spatial_compose = RENAME(spatial_compose_dd137i_dy); - d->vertical_compose_l0.tap5 = RENAME(vertical_compose_dd137iL0); - d->vertical_compose_h0.tap5 = RENAME(vertical_compose_dd97iH0); - d->horizontal_compose = RENAME(horizontal_compose_dd137i); - d->support = 7; - break; - case DWT_DIRAC_HAAR0: - case DWT_DIRAC_HAAR1: - d->spatial_compose = RENAME(spatial_compose_haari_dy); - d->vertical_compose = RENAME(vertical_compose_haar); - if (type == DWT_DIRAC_HAAR0) - d->horizontal_compose = RENAME(horizontal_compose_haar0i); - else - d->horizontal_compose = RENAME(horizontal_compose_haar1i); - d->support = 1; - break; - case DWT_DIRAC_FIDELITY: - d->spatial_compose = RENAME(spatial_compose_fidelity); - d->vertical_compose_l0.tap9 = RENAME(vertical_compose_fidelityiL0); - d->vertical_compose_h0.tap9 = RENAME(vertical_compose_fidelityiH0); - d->horizontal_compose = RENAME(horizontal_compose_fidelityi); - d->support = 0; // not really used - break; - case DWT_DIRAC_DAUB9_7: - d->spatial_compose = RENAME(spatial_compose_daub97i_dy); - d->vertical_compose_l0.tap3 = RENAME(vertical_compose_daub97iL0); - d->vertical_compose_h0.tap3 = RENAME(vertical_compose_daub97iH0); - d->vertical_compose_l1 = RENAME(vertical_compose_daub97iL1); - d->vertical_compose_h1 = RENAME(vertical_compose_daub97iH1); - d->horizontal_compose = RENAME(horizontal_compose_daub97i); - d->support = 5; - break; - default: - return AVERROR_INVALIDDATA; - } - - return 0; -} - -#undef RENAME -#undef TYPE diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaomenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaomenc.c deleted file mode 100644 index 0b88102c77573b1da099e7df08d9145db00dc57d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaomenc.c +++ /dev/null @@ -1,1563 +0,0 @@ -/* - * Copyright (c) 2010, Google, Inc. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * AV1 encoder support via libaom - */ - -#include - -#define AOM_DISABLE_CTRL_TYPECHECKS 1 -#include -#include - -#include "libavutil/avassert.h" -#include "libavutil/base64.h" -#include "libavutil/common.h" -#include "libavutil/cpu.h" -#include "libavutil/imgutils.h" -#include "libavutil/mathematics.h" -#include "libavutil/opt.h" -#include "libavutil/pixdesc.h" - -#include "av1.h" -#include "avcodec.h" -#include "bsf.h" -#include "codec_internal.h" -#include "encode.h" -#include "internal.h" -#include "libaom.h" -#include "packet_internal.h" -#include "profiles.h" - -/* - * Portion of struct aom_codec_cx_pkt from aom_encoder.h. - * One encoded frame returned from the library. - */ -struct FrameListData { - void *buf; /**< compressed data buffer */ - size_t sz; /**< length of compressed data */ - int64_t pts; /**< time stamp to show frame - (in timebase units) */ - unsigned long duration; /**< duration to show frame - (in timebase units) */ - uint32_t flags; /**< flags for this frame */ - uint64_t sse[4]; - int have_sse; /**< true if we have pending sse[] */ - uint64_t frame_number; - struct FrameListData *next; -}; - -typedef struct AOMEncoderContext { - AVClass *class; - AVBSFContext *bsf; - struct aom_codec_ctx encoder; - struct aom_image rawimg; - struct aom_fixed_buf twopass_stats; - unsigned twopass_stats_size; - struct FrameListData *coded_frame_list; - int cpu_used; - int auto_alt_ref; - int arnr_max_frames; - int arnr_strength; - int aq_mode; - int lag_in_frames; - int error_resilient; - int crf; - int static_thresh; - int drop_threshold; - int denoise_noise_level; - int denoise_block_size; - uint64_t sse[4]; - int have_sse; /**< true if we have pending sse[] */ - uint64_t frame_number; - int rc_undershoot_pct; - int rc_overshoot_pct; - int minsection_pct; - int maxsection_pct; - int frame_parallel; - int tile_cols, tile_rows; - int tile_cols_log2, tile_rows_log2; - aom_superblock_size_t superblock_size; - int uniform_tiles; - int row_mt; - int enable_cdef; - int enable_global_motion; - int enable_intrabc; - int enable_restoration; - int usage; - int tune; - int still_picture; - int enable_rect_partitions; - int enable_1to4_partitions; - int enable_ab_partitions; - int enable_angle_delta; - int enable_cfl_intra; - int enable_paeth_intra; - int enable_smooth_intra; - int enable_intra_edge_filter; - int enable_palette; - int enable_filter_intra; - int enable_flip_idtx; - int enable_tx64; - int reduced_tx_type_set; - int use_intra_dct_only; - int use_inter_dct_only; - int use_intra_default_tx_only; - int enable_ref_frame_mvs; - int enable_interinter_wedge; - int enable_interintra_wedge; - int enable_interintra_comp; - int enable_masked_comp; - int enable_obmc; - int enable_onesided_comp; - int enable_reduced_reference_set; - int enable_smooth_interintra; - int enable_diff_wtd_comp; - int enable_dist_wtd_comp; - int enable_dual_filter; - AVDictionary *aom_params; -} AOMContext; - -static const char *const ctlidstr[] = { - [AOME_SET_CPUUSED] = "AOME_SET_CPUUSED", - [AOME_SET_CQ_LEVEL] = "AOME_SET_CQ_LEVEL", - [AOME_SET_ENABLEAUTOALTREF] = "AOME_SET_ENABLEAUTOALTREF", - [AOME_SET_ARNR_MAXFRAMES] = "AOME_SET_ARNR_MAXFRAMES", - [AOME_SET_ARNR_STRENGTH] = "AOME_SET_ARNR_STRENGTH", - [AOME_SET_STATIC_THRESHOLD] = "AOME_SET_STATIC_THRESHOLD", - [AV1E_SET_COLOR_RANGE] = "AV1E_SET_COLOR_RANGE", - [AV1E_SET_COLOR_PRIMARIES] = "AV1E_SET_COLOR_PRIMARIES", - [AV1E_SET_MATRIX_COEFFICIENTS] = "AV1E_SET_MATRIX_COEFFICIENTS", - [AV1E_SET_TRANSFER_CHARACTERISTICS] = "AV1E_SET_TRANSFER_CHARACTERISTICS", - [AV1E_SET_AQ_MODE] = "AV1E_SET_AQ_MODE", - [AV1E_SET_FRAME_PARALLEL_DECODING] = "AV1E_SET_FRAME_PARALLEL_DECODING", - [AV1E_SET_SUPERBLOCK_SIZE] = "AV1E_SET_SUPERBLOCK_SIZE", - [AV1E_SET_TILE_COLUMNS] = "AV1E_SET_TILE_COLUMNS", - [AV1E_SET_TILE_ROWS] = "AV1E_SET_TILE_ROWS", - [AV1E_SET_ENABLE_RESTORATION] = "AV1E_SET_ENABLE_RESTORATION", -#ifdef AOM_CTRL_AV1E_SET_ROW_MT - [AV1E_SET_ROW_MT] = "AV1E_SET_ROW_MT", -#endif -#ifdef AOM_CTRL_AV1E_SET_DENOISE_NOISE_LEVEL - [AV1E_SET_DENOISE_NOISE_LEVEL] = "AV1E_SET_DENOISE_NOISE_LEVEL", -#endif -#ifdef AOM_CTRL_AV1E_SET_DENOISE_BLOCK_SIZE - [AV1E_SET_DENOISE_BLOCK_SIZE] = "AV1E_SET_DENOISE_BLOCK_SIZE", -#endif -#ifdef AOM_CTRL_AV1E_SET_MAX_REFERENCE_FRAMES - [AV1E_SET_MAX_REFERENCE_FRAMES] = "AV1E_SET_MAX_REFERENCE_FRAMES", -#endif -#ifdef AOM_CTRL_AV1E_SET_ENABLE_GLOBAL_MOTION - [AV1E_SET_ENABLE_GLOBAL_MOTION] = "AV1E_SET_ENABLE_GLOBAL_MOTION", -#endif -#ifdef AOM_CTRL_AV1E_SET_ENABLE_INTRABC - [AV1E_SET_ENABLE_INTRABC] = "AV1E_SET_ENABLE_INTRABC", -#endif - [AV1E_SET_ENABLE_CDEF] = "AV1E_SET_ENABLE_CDEF", - [AOME_SET_TUNING] = "AOME_SET_TUNING", -#if AOM_ENCODER_ABI_VERSION >= 22 - [AV1E_SET_ENABLE_1TO4_PARTITIONS] = "AV1E_SET_ENABLE_1TO4_PARTITIONS", - [AV1E_SET_ENABLE_AB_PARTITIONS] = "AV1E_SET_ENABLE_AB_PARTITIONS", - [AV1E_SET_ENABLE_RECT_PARTITIONS] = "AV1E_SET_ENABLE_RECT_PARTITIONS", - [AV1E_SET_ENABLE_ANGLE_DELTA] = "AV1E_SET_ENABLE_ANGLE_DELTA", - [AV1E_SET_ENABLE_CFL_INTRA] = "AV1E_SET_ENABLE_CFL_INTRA", - [AV1E_SET_ENABLE_FILTER_INTRA] = "AV1E_SET_ENABLE_FILTER_INTRA", - [AV1E_SET_ENABLE_INTRA_EDGE_FILTER] = "AV1E_SET_ENABLE_INTRA_EDGE_FILTER", - [AV1E_SET_ENABLE_PAETH_INTRA] = "AV1E_SET_ENABLE_PAETH_INTRA", - [AV1E_SET_ENABLE_SMOOTH_INTRA] = "AV1E_SET_ENABLE_SMOOTH_INTRA", - [AV1E_SET_ENABLE_PALETTE] = "AV1E_SET_ENABLE_PALETTE", - [AV1E_SET_ENABLE_FLIP_IDTX] = "AV1E_SET_ENABLE_FLIP_IDTX", - [AV1E_SET_ENABLE_TX64] = "AV1E_SET_ENABLE_TX64", - [AV1E_SET_INTRA_DCT_ONLY] = "AV1E_SET_INTRA_DCT_ONLY", - [AV1E_SET_INTER_DCT_ONLY] = "AV1E_SET_INTER_DCT_ONLY", - [AV1E_SET_INTRA_DEFAULT_TX_ONLY] = "AV1E_SET_INTRA_DEFAULT_TX_ONLY", - [AV1E_SET_REDUCED_TX_TYPE_SET] = "AV1E_SET_REDUCED_TX_TYPE_SET", - [AV1E_SET_ENABLE_DIFF_WTD_COMP] = "AV1E_SET_ENABLE_DIFF_WTD_COMP", - [AV1E_SET_ENABLE_DIST_WTD_COMP] = "AV1E_SET_ENABLE_DIST_WTD_COMP", - [AV1E_SET_ENABLE_DUAL_FILTER] = "AV1E_SET_ENABLE_DUAL_FILTER", - [AV1E_SET_ENABLE_INTERINTER_WEDGE] = "AV1E_SET_ENABLE_INTERINTER_WEDGE", - [AV1E_SET_ENABLE_INTERINTRA_WEDGE] = "AV1E_SET_ENABLE_INTERINTRA_WEDGE", - [AV1E_SET_ENABLE_MASKED_COMP] = "AV1E_SET_ENABLE_MASKED_COMP", - [AV1E_SET_ENABLE_INTERINTRA_COMP] = "AV1E_SET_ENABLE_INTERINTRA_COMP", - [AV1E_SET_ENABLE_OBMC] = "AV1E_SET_ENABLE_OBMC", - [AV1E_SET_ENABLE_ONESIDED_COMP] = "AV1E_SET_ENABLE_ONESIDED_COMP", - [AV1E_SET_REDUCED_REFERENCE_SET] = "AV1E_SET_REDUCED_REFERENCE_SET", - [AV1E_SET_ENABLE_SMOOTH_INTERINTRA] = "AV1E_SET_ENABLE_SMOOTH_INTERINTRA", - [AV1E_SET_ENABLE_REF_FRAME_MVS] = "AV1E_SET_ENABLE_REF_FRAME_MVS", -#endif -#ifdef AOM_CTRL_AV1E_GET_NUM_OPERATING_POINTS - [AV1E_GET_NUM_OPERATING_POINTS] = "AV1E_GET_NUM_OPERATING_POINTS", -#endif -#ifdef AOM_CTRL_AV1E_GET_SEQ_LEVEL_IDX - [AV1E_GET_SEQ_LEVEL_IDX] = "AV1E_GET_SEQ_LEVEL_IDX", -#endif -#ifdef AOM_CTRL_AV1E_GET_TARGET_SEQ_LEVEL_IDX - [AV1E_GET_TARGET_SEQ_LEVEL_IDX] = "AV1E_GET_TARGET_SEQ_LEVEL_IDX", -#endif - [AV1_GET_NEW_FRAME_IMAGE] = "AV1_GET_NEW_FRAME_IMAGE", -}; - -static av_cold void log_encoder_error(AVCodecContext *avctx, const char *desc) -{ - AOMContext *ctx = avctx->priv_data; - const char *error = aom_codec_error(&ctx->encoder); - const char *detail = aom_codec_error_detail(&ctx->encoder); - - av_log(avctx, AV_LOG_ERROR, "%s: %s\n", desc, error); - if (detail) - av_log(avctx, AV_LOG_ERROR, " Additional information: %s\n", detail); -} - -static av_cold void dump_enc_cfg(AVCodecContext *avctx, - const struct aom_codec_enc_cfg *cfg, - int level) -{ - int width = -30; - - av_log(avctx, level, "aom_codec_enc_cfg\n"); - av_log(avctx, level, "generic settings\n" - " %*s%u\n %*s%u\n %*s%u\n %*s%u\n %*s%u\n" - " %*s%u\n %*s%u\n" - " %*s{%u/%u}\n %*s%u\n %*s%d\n %*s%u\n", - width, "g_usage:", cfg->g_usage, - width, "g_threads:", cfg->g_threads, - width, "g_profile:", cfg->g_profile, - width, "g_w:", cfg->g_w, - width, "g_h:", cfg->g_h, - width, "g_bit_depth:", cfg->g_bit_depth, - width, "g_input_bit_depth:", cfg->g_input_bit_depth, - width, "g_timebase:", cfg->g_timebase.num, cfg->g_timebase.den, - width, "g_error_resilient:", cfg->g_error_resilient, - width, "g_pass:", cfg->g_pass, - width, "g_lag_in_frames:", cfg->g_lag_in_frames); - av_log(avctx, level, "rate control settings\n" - " %*s%u\n %*s%d\n %*s%p(%"SIZE_SPECIFIER")\n %*s%u\n", - width, "rc_dropframe_thresh:", cfg->rc_dropframe_thresh, - width, "rc_end_usage:", cfg->rc_end_usage, - width, "rc_twopass_stats_in:", cfg->rc_twopass_stats_in.buf, cfg->rc_twopass_stats_in.sz, - width, "rc_target_bitrate:", cfg->rc_target_bitrate); - av_log(avctx, level, "quantizer settings\n" - " %*s%u\n %*s%u\n", - width, "rc_min_quantizer:", cfg->rc_min_quantizer, - width, "rc_max_quantizer:", cfg->rc_max_quantizer); - av_log(avctx, level, "bitrate tolerance\n" - " %*s%u\n %*s%u\n", - width, "rc_undershoot_pct:", cfg->rc_undershoot_pct, - width, "rc_overshoot_pct:", cfg->rc_overshoot_pct); - av_log(avctx, level, "decoder buffer model\n" - " %*s%u\n %*s%u\n %*s%u\n", - width, "rc_buf_sz:", cfg->rc_buf_sz, - width, "rc_buf_initial_sz:", cfg->rc_buf_initial_sz, - width, "rc_buf_optimal_sz:", cfg->rc_buf_optimal_sz); - av_log(avctx, level, "2 pass rate control settings\n" - " %*s%u\n %*s%u\n %*s%u\n", - width, "rc_2pass_vbr_bias_pct:", cfg->rc_2pass_vbr_bias_pct, - width, "rc_2pass_vbr_minsection_pct:", cfg->rc_2pass_vbr_minsection_pct, - width, "rc_2pass_vbr_maxsection_pct:", cfg->rc_2pass_vbr_maxsection_pct); - av_log(avctx, level, "keyframing settings\n" - " %*s%d\n %*s%u\n %*s%u\n", - width, "kf_mode:", cfg->kf_mode, - width, "kf_min_dist:", cfg->kf_min_dist, - width, "kf_max_dist:", cfg->kf_max_dist); - av_log(avctx, level, "tile settings\n" - " %*s%d\n %*s%d\n", - width, "tile_width_count:", cfg->tile_width_count, - width, "tile_height_count:", cfg->tile_height_count); - av_log(avctx, level, "\n"); -} - -static void coded_frame_add(void *list, struct FrameListData *cx_frame) -{ - struct FrameListData **p = list; - - while (*p) - p = &(*p)->next; - *p = cx_frame; - cx_frame->next = NULL; -} - -static av_cold void free_coded_frame(struct FrameListData *cx_frame) -{ - av_freep(&cx_frame->buf); - av_freep(&cx_frame); -} - -static av_cold void free_frame_list(struct FrameListData *list) -{ - struct FrameListData *p = list; - - while (p) { - list = list->next; - free_coded_frame(p); - p = list; - } -} - -static av_cold int codecctl_int(AVCodecContext *avctx, -#ifdef UENUM1BYTE - aome_enc_control_id id, -#else - enum aome_enc_control_id id, -#endif - int val) -{ - AOMContext *ctx = avctx->priv_data; - char buf[80]; - int width = -30; - int res; - - snprintf(buf, sizeof(buf), "%s:", ctlidstr[id]); - av_log(avctx, AV_LOG_DEBUG, " %*s%d\n", width, buf, val); - - res = aom_codec_control(&ctx->encoder, id, val); - if (res != AOM_CODEC_OK) { - snprintf(buf, sizeof(buf), "Failed to set %s codec control", - ctlidstr[id]); - log_encoder_error(avctx, buf); - return AVERROR(EINVAL); - } - - return 0; -} - -#if defined(AOM_CTRL_AV1E_GET_NUM_OPERATING_POINTS) && \ - defined(AOM_CTRL_AV1E_GET_SEQ_LEVEL_IDX) && \ - defined(AOM_CTRL_AV1E_GET_TARGET_SEQ_LEVEL_IDX) -static av_cold int codecctl_intp(AVCodecContext *avctx, -#ifdef UENUM1BYTE - aome_enc_control_id id, -#else - enum aome_enc_control_id id, -#endif - int* ptr) -{ - AOMContext *ctx = avctx->priv_data; - char buf[80]; - int width = -30; - int res; - - snprintf(buf, sizeof(buf), "%s:", ctlidstr[id]); - av_log(avctx, AV_LOG_DEBUG, " %*s%d\n", width, buf, *ptr); - - res = aom_codec_control(&ctx->encoder, id, ptr); - if (res != AOM_CODEC_OK) { - snprintf(buf, sizeof(buf), "Failed to set %s codec control", - ctlidstr[id]); - log_encoder_error(avctx, buf); - return AVERROR(EINVAL); - } - - return 0; -} -#endif - -static av_cold int codecctl_imgp(AVCodecContext *avctx, -#ifdef UENUM1BYTE - aome_enc_control_id id, -#else - enum aome_enc_control_id id, -#endif - struct aom_image *img) -{ - AOMContext *ctx = avctx->priv_data; - char buf[80]; - int res; - - snprintf(buf, sizeof(buf), "%s:", ctlidstr[id]); - - res = aom_codec_control(&ctx->encoder, id, img); - if (res != AOM_CODEC_OK) { - snprintf(buf, sizeof(buf), "Failed to get %s codec control", - ctlidstr[id]); - log_encoder_error(avctx, buf); - return AVERROR(EINVAL); - } - - return 0; -} - -static av_cold int aom_free(AVCodecContext *avctx) -{ - AOMContext *ctx = avctx->priv_data; - -#if defined(AOM_CTRL_AV1E_GET_NUM_OPERATING_POINTS) && \ - defined(AOM_CTRL_AV1E_GET_SEQ_LEVEL_IDX) && \ - defined(AOM_CTRL_AV1E_GET_TARGET_SEQ_LEVEL_IDX) - if (ctx->encoder.iface && !(avctx->flags & AV_CODEC_FLAG_PASS1)) { - int num_operating_points; - int levels[32]; - int target_levels[32]; - - if (!codecctl_intp(avctx, AV1E_GET_NUM_OPERATING_POINTS, - &num_operating_points) && - !codecctl_intp(avctx, AV1E_GET_SEQ_LEVEL_IDX, levels) && - !codecctl_intp(avctx, AV1E_GET_TARGET_SEQ_LEVEL_IDX, - target_levels)) { - for (int i = 0; i < num_operating_points; i++) { - if (levels[i] > target_levels[i]) { - // Warn when the target level was not met - av_log(avctx, AV_LOG_WARNING, - "Could not encode to target level %d.%d for " - "operating point %d. The output level is %d.%d.\n", - 2 + (target_levels[i] >> 2), target_levels[i] & 3, - i, 2 + (levels[i] >> 2), levels[i] & 3); - } else if (target_levels[i] < 31) { - // Log the encoded level if a target level was given - av_log(avctx, AV_LOG_INFO, - "Output level for operating point %d is %d.%d.\n", - i, 2 + (levels[i] >> 2), levels[i] & 3); - } - } - } - } -#endif - - aom_codec_destroy(&ctx->encoder); - av_freep(&ctx->twopass_stats.buf); - av_freep(&avctx->stats_out); - free_frame_list(ctx->coded_frame_list); - av_bsf_free(&ctx->bsf); - return 0; -} - -static int set_pix_fmt(AVCodecContext *avctx, aom_codec_caps_t codec_caps, - struct aom_codec_enc_cfg *enccfg, aom_codec_flags_t *flags, - aom_img_fmt_t *img_fmt) -{ - AOMContext av_unused *ctx = avctx->priv_data; - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt); - enccfg->g_bit_depth = enccfg->g_input_bit_depth = desc->comp[0].depth; - switch (avctx->pix_fmt) { - case AV_PIX_FMT_GRAY8: - enccfg->monochrome = 1; - /* Fall-through */ - case AV_PIX_FMT_YUV420P: - enccfg->g_profile = FF_PROFILE_AV1_MAIN; - *img_fmt = AOM_IMG_FMT_I420; - return 0; - case AV_PIX_FMT_YUV422P: - enccfg->g_profile = FF_PROFILE_AV1_PROFESSIONAL; - *img_fmt = AOM_IMG_FMT_I422; - return 0; - case AV_PIX_FMT_YUV444P: - case AV_PIX_FMT_GBRP: - enccfg->g_profile = FF_PROFILE_AV1_HIGH; - *img_fmt = AOM_IMG_FMT_I444; - return 0; - case AV_PIX_FMT_GRAY10: - case AV_PIX_FMT_GRAY12: - enccfg->monochrome = 1; - /* Fall-through */ - case AV_PIX_FMT_YUV420P10: - case AV_PIX_FMT_YUV420P12: - if (codec_caps & AOM_CODEC_CAP_HIGHBITDEPTH) { - enccfg->g_profile = - enccfg->g_bit_depth == 10 ? FF_PROFILE_AV1_MAIN : FF_PROFILE_AV1_PROFESSIONAL; - *img_fmt = AOM_IMG_FMT_I42016; - *flags |= AOM_CODEC_USE_HIGHBITDEPTH; - return 0; - } - break; - case AV_PIX_FMT_YUV422P10: - case AV_PIX_FMT_YUV422P12: - if (codec_caps & AOM_CODEC_CAP_HIGHBITDEPTH) { - enccfg->g_profile = FF_PROFILE_AV1_PROFESSIONAL; - *img_fmt = AOM_IMG_FMT_I42216; - *flags |= AOM_CODEC_USE_HIGHBITDEPTH; - return 0; - } - break; - case AV_PIX_FMT_YUV444P10: - case AV_PIX_FMT_YUV444P12: - case AV_PIX_FMT_GBRP10: - case AV_PIX_FMT_GBRP12: - if (codec_caps & AOM_CODEC_CAP_HIGHBITDEPTH) { - enccfg->g_profile = - enccfg->g_bit_depth == 10 ? FF_PROFILE_AV1_HIGH : FF_PROFILE_AV1_PROFESSIONAL; - *img_fmt = AOM_IMG_FMT_I44416; - *flags |= AOM_CODEC_USE_HIGHBITDEPTH; - return 0; - } - break; - default: - break; - } - av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format.\n"); - return AVERROR_INVALIDDATA; -} - -static void set_color_range(AVCodecContext *avctx) -{ - aom_color_range_t aom_cr; - switch (avctx->color_range) { - case AVCOL_RANGE_UNSPECIFIED: - case AVCOL_RANGE_MPEG: aom_cr = AOM_CR_STUDIO_RANGE; break; - case AVCOL_RANGE_JPEG: aom_cr = AOM_CR_FULL_RANGE; break; - default: - av_log(avctx, AV_LOG_WARNING, "Unsupported color range (%d)\n", - avctx->color_range); - return; - } - - codecctl_int(avctx, AV1E_SET_COLOR_RANGE, aom_cr); -} - -static int count_uniform_tiling(int dim, int sb_size, int tiles_log2) -{ - int sb_dim = (dim + sb_size - 1) / sb_size; - int tile_dim = (sb_dim + (1 << tiles_log2) - 1) >> tiles_log2; - av_assert0(tile_dim > 0); - return (sb_dim + tile_dim - 1) / tile_dim; -} - -static int choose_tiling(AVCodecContext *avctx, - struct aom_codec_enc_cfg *enccfg) -{ - AOMContext *ctx = avctx->priv_data; - int sb_128x128_possible, sb_size, sb_width, sb_height; - int uniform_rows, uniform_cols; - int uniform_64x64_possible, uniform_128x128_possible; - int tile_size, rounding, i; - - if (ctx->tile_cols_log2 >= 0) - ctx->tile_cols = 1 << ctx->tile_cols_log2; - if (ctx->tile_rows_log2 >= 0) - ctx->tile_rows = 1 << ctx->tile_rows_log2; - - if (ctx->tile_cols == 0) { - ctx->tile_cols = (avctx->width + AV1_MAX_TILE_WIDTH - 1) / - AV1_MAX_TILE_WIDTH; - if (ctx->tile_cols > 1) { - av_log(avctx, AV_LOG_DEBUG, "Automatically using %d tile " - "columns to fill width.\n", ctx->tile_cols); - } - } - av_assert0(ctx->tile_cols > 0); - if (ctx->tile_rows == 0) { - int max_tile_width = - FFALIGN((FFALIGN(avctx->width, 128) + - ctx->tile_cols - 1) / ctx->tile_cols, 128); - ctx->tile_rows = - (max_tile_width * FFALIGN(avctx->height, 128) + - AV1_MAX_TILE_AREA - 1) / AV1_MAX_TILE_AREA; - if (ctx->tile_rows > 1) { - av_log(avctx, AV_LOG_DEBUG, "Automatically using %d tile " - "rows to fill area.\n", ctx->tile_rows); - } - } - av_assert0(ctx->tile_rows > 0); - - if ((avctx->width + 63) / 64 < ctx->tile_cols || - (avctx->height + 63) / 64 < ctx->tile_rows) { - av_log(avctx, AV_LOG_ERROR, "Invalid tile sizing: frame not " - "large enough to fit specified tile arrangement.\n"); - return AVERROR(EINVAL); - } - if (ctx->tile_cols > AV1_MAX_TILE_COLS || - ctx->tile_rows > AV1_MAX_TILE_ROWS) { - av_log(avctx, AV_LOG_ERROR, "Invalid tile sizing: AV1 does " - "not allow more than %dx%d tiles.\n", - AV1_MAX_TILE_COLS, AV1_MAX_TILE_ROWS); - return AVERROR(EINVAL); - } - if (avctx->width / ctx->tile_cols > AV1_MAX_TILE_WIDTH) { - av_log(avctx, AV_LOG_ERROR, "Invalid tile sizing: AV1 does " - "not allow tiles of width greater than %d.\n", - AV1_MAX_TILE_WIDTH); - return AVERROR(EINVAL); - } - - ctx->superblock_size = AOM_SUPERBLOCK_SIZE_DYNAMIC; - - if (ctx->tile_cols == 1 && ctx->tile_rows == 1) { - av_log(avctx, AV_LOG_DEBUG, "Using a single tile.\n"); - return 0; - } - - sb_128x128_possible = - (avctx->width + 127) / 128 >= ctx->tile_cols && - (avctx->height + 127) / 128 >= ctx->tile_rows; - - ctx->tile_cols_log2 = ctx->tile_cols == 1 ? 0 : - av_log2(ctx->tile_cols - 1) + 1; - ctx->tile_rows_log2 = ctx->tile_rows == 1 ? 0 : - av_log2(ctx->tile_rows - 1) + 1; - - uniform_cols = count_uniform_tiling(avctx->width, - 64, ctx->tile_cols_log2); - uniform_rows = count_uniform_tiling(avctx->height, - 64, ctx->tile_rows_log2); - av_log(avctx, AV_LOG_DEBUG, "Uniform with 64x64 superblocks " - "-> %dx%d tiles.\n", uniform_cols, uniform_rows); - uniform_64x64_possible = uniform_cols == ctx->tile_cols && - uniform_rows == ctx->tile_rows; - - if (sb_128x128_possible) { - uniform_cols = count_uniform_tiling(avctx->width, - 128, ctx->tile_cols_log2); - uniform_rows = count_uniform_tiling(avctx->height, - 128, ctx->tile_rows_log2); - av_log(avctx, AV_LOG_DEBUG, "Uniform with 128x128 superblocks " - "-> %dx%d tiles.\n", uniform_cols, uniform_rows); - uniform_128x128_possible = uniform_cols == ctx->tile_cols && - uniform_rows == ctx->tile_rows; - } else { - av_log(avctx, AV_LOG_DEBUG, "128x128 superblocks not possible.\n"); - uniform_128x128_possible = 0; - } - - ctx->uniform_tiles = 1; - if (uniform_64x64_possible && uniform_128x128_possible) { - av_log(avctx, AV_LOG_DEBUG, "Using uniform tiling with dynamic " - "superblocks (tile_cols_log2 = %d, tile_rows_log2 = %d).\n", - ctx->tile_cols_log2, ctx->tile_rows_log2); - return 0; - } - if (uniform_64x64_possible && !sb_128x128_possible) { - av_log(avctx, AV_LOG_DEBUG, "Using uniform tiling with 64x64 " - "superblocks (tile_cols_log2 = %d, tile_rows_log2 = %d).\n", - ctx->tile_cols_log2, ctx->tile_rows_log2); - ctx->superblock_size = AOM_SUPERBLOCK_SIZE_64X64; - return 0; - } - if (uniform_128x128_possible) { - av_log(avctx, AV_LOG_DEBUG, "Using uniform tiling with 128x128 " - "superblocks (tile_cols_log2 = %d, tile_rows_log2 = %d).\n", - ctx->tile_cols_log2, ctx->tile_rows_log2); - ctx->superblock_size = AOM_SUPERBLOCK_SIZE_128X128; - return 0; - } - ctx->uniform_tiles = 0; - - if (sb_128x128_possible) { - sb_size = 128; - ctx->superblock_size = AOM_SUPERBLOCK_SIZE_128X128; - } else { - sb_size = 64; - ctx->superblock_size = AOM_SUPERBLOCK_SIZE_64X64; - } - av_log(avctx, AV_LOG_DEBUG, "Using fixed tiling with %dx%d " - "superblocks (tile_cols = %d, tile_rows = %d).\n", - sb_size, sb_size, ctx->tile_cols, ctx->tile_rows); - - enccfg->tile_width_count = ctx->tile_cols; - enccfg->tile_height_count = ctx->tile_rows; - - sb_width = (avctx->width + sb_size - 1) / sb_size; - sb_height = (avctx->height + sb_size - 1) / sb_size; - - tile_size = sb_width / ctx->tile_cols; - rounding = sb_width % ctx->tile_cols; - for (i = 0; i < ctx->tile_cols; i++) { - enccfg->tile_widths[i] = tile_size + - (i < rounding / 2 || - i > ctx->tile_cols - 1 - (rounding + 1) / 2); - } - - tile_size = sb_height / ctx->tile_rows; - rounding = sb_height % ctx->tile_rows; - for (i = 0; i < ctx->tile_rows; i++) { - enccfg->tile_heights[i] = tile_size + - (i < rounding / 2 || - i > ctx->tile_rows - 1 - (rounding + 1) / 2); - } - - return 0; -} - -static av_cold int aom_init(AVCodecContext *avctx, - const struct aom_codec_iface *iface) -{ - AOMContext *ctx = avctx->priv_data; - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt); - struct aom_codec_enc_cfg enccfg = { 0 }; -#ifdef AOM_FRAME_IS_INTRAONLY - aom_codec_flags_t flags = - (avctx->flags & AV_CODEC_FLAG_PSNR) ? AOM_CODEC_USE_PSNR : 0; -#else - aom_codec_flags_t flags = 0; -#endif - AVCPBProperties *cpb_props; - int res; - aom_img_fmt_t img_fmt; - aom_codec_caps_t codec_caps = aom_codec_get_caps(iface); - - av_log(avctx, AV_LOG_INFO, "%s\n", aom_codec_version_str()); - av_log(avctx, AV_LOG_VERBOSE, "%s\n", aom_codec_build_config()); - - if ((res = aom_codec_enc_config_default(iface, &enccfg, ctx->usage)) != AOM_CODEC_OK) { - av_log(avctx, AV_LOG_ERROR, "Failed to get config: %s\n", - aom_codec_err_to_string(res)); - return AVERROR(EINVAL); - } - - if (set_pix_fmt(avctx, codec_caps, &enccfg, &flags, &img_fmt)) - return AVERROR(EINVAL); - - if(!avctx->bit_rate) - if(avctx->rc_max_rate || avctx->rc_buffer_size || avctx->rc_initial_buffer_occupancy) { - av_log( avctx, AV_LOG_ERROR, "Rate control parameters set without a bitrate\n"); - return AVERROR(EINVAL); - } - - dump_enc_cfg(avctx, &enccfg, AV_LOG_DEBUG); - - enccfg.g_w = avctx->width; - enccfg.g_h = avctx->height; - enccfg.g_timebase.num = avctx->time_base.num; - enccfg.g_timebase.den = avctx->time_base.den; - enccfg.g_threads = - FFMIN(avctx->thread_count ? avctx->thread_count : av_cpu_count(), 64); - - if (ctx->lag_in_frames >= 0) - enccfg.g_lag_in_frames = ctx->lag_in_frames; - - if (avctx->flags & AV_CODEC_FLAG_PASS1) - enccfg.g_pass = AOM_RC_FIRST_PASS; - else if (avctx->flags & AV_CODEC_FLAG_PASS2) - enccfg.g_pass = AOM_RC_LAST_PASS; - else - enccfg.g_pass = AOM_RC_ONE_PASS; - - if (avctx->rc_min_rate == avctx->rc_max_rate && - avctx->rc_min_rate == avctx->bit_rate && avctx->bit_rate) { - enccfg.rc_end_usage = AOM_CBR; - } else if (ctx->crf >= 0) { - enccfg.rc_end_usage = AOM_CQ; - if (!avctx->bit_rate) - enccfg.rc_end_usage = AOM_Q; - } - - if (avctx->bit_rate) { - enccfg.rc_target_bitrate = av_rescale_rnd(avctx->bit_rate, 1, 1000, - AV_ROUND_NEAR_INF); - } else if (enccfg.rc_end_usage != AOM_Q) { - enccfg.rc_end_usage = AOM_Q; - ctx->crf = 32; - av_log(avctx, AV_LOG_WARNING, - "Neither bitrate nor constrained quality specified, using default CRF of %d\n", - ctx->crf); - } - - if (avctx->qmin >= 0) - enccfg.rc_min_quantizer = avctx->qmin; - if (avctx->qmax >= 0) { - enccfg.rc_max_quantizer = avctx->qmax; - } else if (!ctx->crf) { - enccfg.rc_max_quantizer = 0; - } - - if (enccfg.rc_end_usage == AOM_CQ || enccfg.rc_end_usage == AOM_Q) { - if (ctx->crf < enccfg.rc_min_quantizer || ctx->crf > enccfg.rc_max_quantizer) { - av_log(avctx, AV_LOG_ERROR, - "CQ level %d must be between minimum and maximum quantizer value (%d-%d)\n", - ctx->crf, enccfg.rc_min_quantizer, enccfg.rc_max_quantizer); - return AVERROR(EINVAL); - } - } - - enccfg.rc_dropframe_thresh = ctx->drop_threshold; - - // 0-100 (0 => CBR, 100 => VBR) - enccfg.rc_2pass_vbr_bias_pct = round(avctx->qcompress * 100); - if (ctx->minsection_pct >= 0) - enccfg.rc_2pass_vbr_minsection_pct = ctx->minsection_pct; - else if (avctx->bit_rate) - enccfg.rc_2pass_vbr_minsection_pct = - avctx->rc_min_rate * 100LL / avctx->bit_rate; - if (ctx->maxsection_pct >= 0) - enccfg.rc_2pass_vbr_maxsection_pct = ctx->maxsection_pct; - else if (avctx->rc_max_rate) - enccfg.rc_2pass_vbr_maxsection_pct = - avctx->rc_max_rate * 100LL / avctx->bit_rate; - - if (avctx->rc_buffer_size) - enccfg.rc_buf_sz = - avctx->rc_buffer_size * 1000LL / avctx->bit_rate; - if (avctx->rc_initial_buffer_occupancy) - enccfg.rc_buf_initial_sz = - avctx->rc_initial_buffer_occupancy * 1000LL / avctx->bit_rate; - enccfg.rc_buf_optimal_sz = enccfg.rc_buf_sz * 5 / 6; - - if (ctx->rc_undershoot_pct >= 0) - enccfg.rc_undershoot_pct = ctx->rc_undershoot_pct; - if (ctx->rc_overshoot_pct >= 0) - enccfg.rc_overshoot_pct = ctx->rc_overshoot_pct; - - // _enc_init() will balk if kf_min_dist differs from max w/AOM_KF_AUTO - if (avctx->keyint_min >= 0 && avctx->keyint_min == avctx->gop_size) - enccfg.kf_min_dist = avctx->keyint_min; - if (avctx->gop_size >= 0) - enccfg.kf_max_dist = avctx->gop_size; - - if (enccfg.g_pass == AOM_RC_FIRST_PASS) - enccfg.g_lag_in_frames = 0; - else if (enccfg.g_pass == AOM_RC_LAST_PASS) { - int decode_size, ret; - - if (!avctx->stats_in) { - av_log(avctx, AV_LOG_ERROR, "No stats file for second pass\n"); - return AVERROR_INVALIDDATA; - } - - ctx->twopass_stats.sz = strlen(avctx->stats_in) * 3 / 4; - ret = av_reallocp(&ctx->twopass_stats.buf, ctx->twopass_stats.sz); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, - "Stat buffer alloc (%"SIZE_SPECIFIER" bytes) failed\n", - ctx->twopass_stats.sz); - ctx->twopass_stats.sz = 0; - return ret; - } - decode_size = av_base64_decode(ctx->twopass_stats.buf, avctx->stats_in, - ctx->twopass_stats.sz); - if (decode_size < 0) { - av_log(avctx, AV_LOG_ERROR, "Stat buffer decode failed\n"); - return AVERROR_INVALIDDATA; - } - - ctx->twopass_stats.sz = decode_size; - enccfg.rc_twopass_stats_in = ctx->twopass_stats; - } - - /* 0-3: For non-zero values the encoder increasingly optimizes for reduced - * complexity playback on low powered devices at the expense of encode - * quality. */ - if (avctx->profile != FF_PROFILE_UNKNOWN) - enccfg.g_profile = avctx->profile; - - enccfg.g_error_resilient = ctx->error_resilient; - - res = choose_tiling(avctx, &enccfg); - if (res < 0) - return res; - - if (ctx->still_picture) { - // Set the maximum number of frames to 1. This will let libaom set - // still_picture and reduced_still_picture_header to 1 in the Sequence - // Header as required by AVIF still images. - enccfg.g_limit = 1; - // Reduce memory usage for still images. - enccfg.g_lag_in_frames = 0; - // All frames will be key frames. - enccfg.kf_max_dist = 0; - enccfg.kf_mode = AOM_KF_DISABLED; - } - - /* Construct Encoder Context */ - res = aom_codec_enc_init(&ctx->encoder, iface, &enccfg, flags); - if (res != AOM_CODEC_OK) { - dump_enc_cfg(avctx, &enccfg, AV_LOG_WARNING); - log_encoder_error(avctx, "Failed to initialize encoder"); - return AVERROR(EINVAL); - } - dump_enc_cfg(avctx, &enccfg, AV_LOG_DEBUG); - - // codec control failures are currently treated only as warnings - av_log(avctx, AV_LOG_DEBUG, "aom_codec_control\n"); - codecctl_int(avctx, AOME_SET_CPUUSED, ctx->cpu_used); - if (ctx->auto_alt_ref >= 0) - codecctl_int(avctx, AOME_SET_ENABLEAUTOALTREF, ctx->auto_alt_ref); - if (ctx->arnr_max_frames >= 0) - codecctl_int(avctx, AOME_SET_ARNR_MAXFRAMES, ctx->arnr_max_frames); - if (ctx->arnr_strength >= 0) - codecctl_int(avctx, AOME_SET_ARNR_STRENGTH, ctx->arnr_strength); - if (ctx->enable_cdef >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_CDEF, ctx->enable_cdef); - if (ctx->enable_restoration >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_RESTORATION, ctx->enable_restoration); -#if AOM_ENCODER_ABI_VERSION >= 22 - if (ctx->enable_rect_partitions >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_RECT_PARTITIONS, ctx->enable_rect_partitions); - if (ctx->enable_1to4_partitions >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_1TO4_PARTITIONS, ctx->enable_1to4_partitions); - if (ctx->enable_ab_partitions >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_AB_PARTITIONS, ctx->enable_ab_partitions); - if (ctx->enable_angle_delta >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_ANGLE_DELTA, ctx->enable_angle_delta); - if (ctx->enable_cfl_intra >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_CFL_INTRA, ctx->enable_cfl_intra); - if (ctx->enable_filter_intra >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_FILTER_INTRA, ctx->enable_filter_intra); - if (ctx->enable_intra_edge_filter >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_INTRA_EDGE_FILTER, ctx->enable_intra_edge_filter); - if (ctx->enable_paeth_intra >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_PAETH_INTRA, ctx->enable_paeth_intra); - if (ctx->enable_smooth_intra >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_SMOOTH_INTRA, ctx->enable_smooth_intra); - if (ctx->enable_palette >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_PALETTE, ctx->enable_palette); - if (ctx->enable_tx64 >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_TX64, ctx->enable_tx64); - if (ctx->enable_flip_idtx >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_FLIP_IDTX, ctx->enable_flip_idtx); - if (ctx->use_intra_dct_only >= 0) - codecctl_int(avctx, AV1E_SET_INTRA_DCT_ONLY, ctx->use_intra_dct_only); - if (ctx->use_inter_dct_only >= 0) - codecctl_int(avctx, AV1E_SET_INTER_DCT_ONLY, ctx->use_inter_dct_only); - if (ctx->use_intra_default_tx_only >= 0) - codecctl_int(avctx, AV1E_SET_INTRA_DEFAULT_TX_ONLY, ctx->use_intra_default_tx_only); - if (ctx->reduced_tx_type_set >= 0) - codecctl_int(avctx, AV1E_SET_REDUCED_TX_TYPE_SET, ctx->reduced_tx_type_set); - if (ctx->enable_ref_frame_mvs >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_REF_FRAME_MVS, ctx->enable_ref_frame_mvs); - if (ctx->enable_reduced_reference_set >= 0) - codecctl_int(avctx, AV1E_SET_REDUCED_REFERENCE_SET, ctx->enable_reduced_reference_set); - if (ctx->enable_diff_wtd_comp >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_DIFF_WTD_COMP, ctx->enable_diff_wtd_comp); - if (ctx->enable_dist_wtd_comp >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_DIST_WTD_COMP, ctx->enable_dist_wtd_comp); - if (ctx->enable_dual_filter >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_DUAL_FILTER, ctx->enable_dual_filter); - if (ctx->enable_interinter_wedge >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_INTERINTER_WEDGE, ctx->enable_interinter_wedge); - if (ctx->enable_masked_comp >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_MASKED_COMP, ctx->enable_masked_comp); - if (ctx->enable_interintra_comp >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_INTERINTRA_COMP, ctx->enable_interintra_comp); - if (ctx->enable_interintra_wedge >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_INTERINTRA_WEDGE, ctx->enable_interintra_wedge); - if (ctx->enable_obmc >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_OBMC, ctx->enable_obmc); - if (ctx->enable_onesided_comp >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_ONESIDED_COMP, ctx->enable_onesided_comp); - if (ctx->enable_smooth_interintra >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_SMOOTH_INTERINTRA, ctx->enable_smooth_interintra); -#endif - - codecctl_int(avctx, AOME_SET_STATIC_THRESHOLD, ctx->static_thresh); - if (ctx->crf >= 0) - codecctl_int(avctx, AOME_SET_CQ_LEVEL, ctx->crf); - if (ctx->tune >= 0) - codecctl_int(avctx, AOME_SET_TUNING, ctx->tune); - - if (desc->flags & AV_PIX_FMT_FLAG_RGB) { - codecctl_int(avctx, AV1E_SET_COLOR_PRIMARIES, AVCOL_PRI_BT709); - codecctl_int(avctx, AV1E_SET_MATRIX_COEFFICIENTS, AVCOL_SPC_RGB); - codecctl_int(avctx, AV1E_SET_TRANSFER_CHARACTERISTICS, AVCOL_TRC_IEC61966_2_1); - } else { - codecctl_int(avctx, AV1E_SET_COLOR_PRIMARIES, avctx->color_primaries); - codecctl_int(avctx, AV1E_SET_MATRIX_COEFFICIENTS, avctx->colorspace); - codecctl_int(avctx, AV1E_SET_TRANSFER_CHARACTERISTICS, avctx->color_trc); - } - if (ctx->aq_mode >= 0) - codecctl_int(avctx, AV1E_SET_AQ_MODE, ctx->aq_mode); - if (ctx->frame_parallel >= 0) - codecctl_int(avctx, AV1E_SET_FRAME_PARALLEL_DECODING, ctx->frame_parallel); - set_color_range(avctx); - - codecctl_int(avctx, AV1E_SET_SUPERBLOCK_SIZE, ctx->superblock_size); - if (ctx->uniform_tiles) { - codecctl_int(avctx, AV1E_SET_TILE_COLUMNS, ctx->tile_cols_log2); - codecctl_int(avctx, AV1E_SET_TILE_ROWS, ctx->tile_rows_log2); - } - -#ifdef AOM_CTRL_AV1E_SET_DENOISE_NOISE_LEVEL - if (ctx->denoise_noise_level >= 0) - codecctl_int(avctx, AV1E_SET_DENOISE_NOISE_LEVEL, ctx->denoise_noise_level); -#endif -#ifdef AOM_CTRL_AV1E_SET_DENOISE_BLOCK_SIZE - if (ctx->denoise_block_size >= 0) - codecctl_int(avctx, AV1E_SET_DENOISE_BLOCK_SIZE, ctx->denoise_block_size); -#endif -#ifdef AOM_CTRL_AV1E_SET_ENABLE_GLOBAL_MOTION - if (ctx->enable_global_motion >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_GLOBAL_MOTION, ctx->enable_global_motion); -#endif -#ifdef AOM_CTRL_AV1E_SET_MAX_REFERENCE_FRAMES - if (avctx->refs >= 3) { - codecctl_int(avctx, AV1E_SET_MAX_REFERENCE_FRAMES, avctx->refs); - } -#endif -#ifdef AOM_CTRL_AV1E_SET_ROW_MT - if (ctx->row_mt >= 0) - codecctl_int(avctx, AV1E_SET_ROW_MT, ctx->row_mt); -#endif -#ifdef AOM_CTRL_AV1E_SET_ENABLE_INTRABC - if (ctx->enable_intrabc >= 0) - codecctl_int(avctx, AV1E_SET_ENABLE_INTRABC, ctx->enable_intrabc); -#endif - -#if AOM_ENCODER_ABI_VERSION >= 23 - { - AVDictionaryEntry *en = NULL; - - while ((en = av_dict_get(ctx->aom_params, "", en, AV_DICT_IGNORE_SUFFIX))) { - int ret = aom_codec_set_option(&ctx->encoder, en->key, en->value); - if (ret != AOM_CODEC_OK) { - log_encoder_error(avctx, en->key); - return AVERROR_EXTERNAL; - } - } - } -#endif - - // provide dummy value to initialize wrapper, values will be updated each _encode() - aom_img_wrap(&ctx->rawimg, img_fmt, avctx->width, avctx->height, 1, - (unsigned char*)1); - - if (codec_caps & AOM_CODEC_CAP_HIGHBITDEPTH) - ctx->rawimg.bit_depth = enccfg.g_bit_depth; - - cpb_props = ff_add_cpb_side_data(avctx); - if (!cpb_props) - return AVERROR(ENOMEM); - - if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) { - const AVBitStreamFilter *filter = av_bsf_get_by_name("extract_extradata"); - int ret; - - if (!filter) { - av_log(avctx, AV_LOG_ERROR, "extract_extradata bitstream filter " - "not found. This is a bug, please report it.\n"); - return AVERROR_BUG; - } - ret = av_bsf_alloc(filter, &ctx->bsf); - if (ret < 0) - return ret; - - ret = avcodec_parameters_from_context(ctx->bsf->par_in, avctx); - if (ret < 0) - return ret; - - ret = av_bsf_init(ctx->bsf); - if (ret < 0) - return ret; - } - - if (enccfg.rc_end_usage == AOM_CBR || - enccfg.g_pass != AOM_RC_ONE_PASS) { - cpb_props->max_bitrate = avctx->rc_max_rate; - cpb_props->min_bitrate = avctx->rc_min_rate; - cpb_props->avg_bitrate = avctx->bit_rate; - } - cpb_props->buffer_size = avctx->rc_buffer_size; - - return 0; -} - -static inline void cx_pktcpy(AOMContext *ctx, - struct FrameListData *dst, - const struct aom_codec_cx_pkt *src) -{ - dst->pts = src->data.frame.pts; - dst->duration = src->data.frame.duration; - dst->flags = src->data.frame.flags; - dst->sz = src->data.frame.sz; - dst->buf = src->data.frame.buf; -#ifdef AOM_FRAME_IS_INTRAONLY - dst->frame_number = ++ctx->frame_number; - dst->have_sse = ctx->have_sse; - if (ctx->have_sse) { - /* associate last-seen SSE to the frame. */ - /* Transfers ownership from ctx to dst. */ - memcpy(dst->sse, ctx->sse, sizeof(dst->sse)); - ctx->have_sse = 0; - } -#endif -} - -/** - * Store coded frame information in format suitable for return from encode2(). - * - * Write information from @a cx_frame to @a pkt - * @return packet data size on success - * @return a negative AVERROR on error - */ -static int storeframe(AVCodecContext *avctx, struct FrameListData *cx_frame, - AVPacket *pkt) -{ - AOMContext *ctx = avctx->priv_data; - int av_unused pict_type; - int ret = ff_get_encode_buffer(avctx, pkt, cx_frame->sz, 0); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, - "Error getting output packet of size %"SIZE_SPECIFIER".\n", cx_frame->sz); - return ret; - } - memcpy(pkt->data, cx_frame->buf, pkt->size); - pkt->pts = pkt->dts = cx_frame->pts; - pkt->duration = cx_frame->duration; - - if (!!(cx_frame->flags & AOM_FRAME_IS_KEY)) { - pkt->flags |= AV_PKT_FLAG_KEY; -#ifdef AOM_FRAME_IS_INTRAONLY - pict_type = AV_PICTURE_TYPE_I; - } else if (cx_frame->flags & AOM_FRAME_IS_INTRAONLY) { - pict_type = AV_PICTURE_TYPE_I; - } else { - pict_type = AV_PICTURE_TYPE_P; - } - - ff_side_data_set_encoder_stats(pkt, 0, cx_frame->sse + 1, - cx_frame->have_sse ? 3 : 0, pict_type); - - if (cx_frame->have_sse) { - int i; - for (i = 0; i < 3; ++i) { - avctx->error[i] += cx_frame->sse[i + 1]; - } - cx_frame->have_sse = 0; -#endif - } - - if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) { - ret = av_bsf_send_packet(ctx->bsf, pkt); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "extract_extradata filter " - "failed to send input packet\n"); - return ret; - } - ret = av_bsf_receive_packet(ctx->bsf, pkt); - - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "extract_extradata filter " - "failed to receive output packet\n"); - return ret; - } - } - return pkt->size; -} - -/** - * Queue multiple output frames from the encoder, returning the front-most. - * In cases where aom_codec_get_cx_data() returns more than 1 frame append - * the frame queue. Return the head frame if available. - * @return Stored frame size - * @return AVERROR(EINVAL) on output size error - * @return AVERROR(ENOMEM) on coded frame queue data allocation error - */ -static int queue_frames(AVCodecContext *avctx, AVPacket *pkt_out) -{ - AOMContext *ctx = avctx->priv_data; - const struct aom_codec_cx_pkt *pkt; - const void *iter = NULL; - int size = 0; - - if (ctx->coded_frame_list) { - struct FrameListData *cx_frame = ctx->coded_frame_list; - /* return the leading frame if we've already begun queueing */ - size = storeframe(avctx, cx_frame, pkt_out); - if (size < 0) - return size; - ctx->coded_frame_list = cx_frame->next; - free_coded_frame(cx_frame); - } - - /* consume all available output from the encoder before returning. buffers - * are only good through the next aom_codec call */ - while ((pkt = aom_codec_get_cx_data(&ctx->encoder, &iter))) { - switch (pkt->kind) { - case AOM_CODEC_CX_FRAME_PKT: - if (!size) { - struct FrameListData cx_frame; - - /* avoid storing the frame when the list is empty and we haven't yet - * provided a frame for output */ - av_assert0(!ctx->coded_frame_list); - cx_pktcpy(ctx, &cx_frame, pkt); - size = storeframe(avctx, &cx_frame, pkt_out); - if (size < 0) - return size; - } else { - struct FrameListData *cx_frame = - av_malloc(sizeof(struct FrameListData)); - - if (!cx_frame) { - av_log(avctx, AV_LOG_ERROR, - "Frame queue element alloc failed\n"); - return AVERROR(ENOMEM); - } - cx_pktcpy(ctx, cx_frame, pkt); - cx_frame->buf = av_malloc(cx_frame->sz); - - if (!cx_frame->buf) { - av_log(avctx, AV_LOG_ERROR, - "Data buffer alloc (%"SIZE_SPECIFIER" bytes) failed\n", - cx_frame->sz); - av_freep(&cx_frame); - return AVERROR(ENOMEM); - } - memcpy(cx_frame->buf, pkt->data.frame.buf, pkt->data.frame.sz); - coded_frame_add(&ctx->coded_frame_list, cx_frame); - } - break; - case AOM_CODEC_STATS_PKT: - { - struct aom_fixed_buf *stats = &ctx->twopass_stats; - uint8_t *tmp = av_fast_realloc(stats->buf, - &ctx->twopass_stats_size, - stats->sz + - pkt->data.twopass_stats.sz); - if (!tmp) { - av_freep(&stats->buf); - stats->sz = 0; - av_log(avctx, AV_LOG_ERROR, "Stat buffer realloc failed\n"); - return AVERROR(ENOMEM); - } - stats->buf = tmp; - memcpy((uint8_t *)stats->buf + stats->sz, - pkt->data.twopass_stats.buf, pkt->data.twopass_stats.sz); - stats->sz += pkt->data.twopass_stats.sz; - break; - } -#ifdef AOM_FRAME_IS_INTRAONLY - case AOM_CODEC_PSNR_PKT: - { - av_assert0(!ctx->have_sse); - ctx->sse[0] = pkt->data.psnr.sse[0]; - ctx->sse[1] = pkt->data.psnr.sse[1]; - ctx->sse[2] = pkt->data.psnr.sse[2]; - ctx->sse[3] = pkt->data.psnr.sse[3]; - ctx->have_sse = 1; - break; - } -#endif - case AOM_CODEC_CUSTOM_PKT: - // ignore unsupported/unrecognized packet types - break; - } - } - - return size; -} - -static enum AVPixelFormat aomfmt_to_pixfmt(struct aom_image *img) -{ - switch (img->fmt) { - case AOM_IMG_FMT_I420: - case AOM_IMG_FMT_I42016: - if (img->bit_depth == 8) - return img->monochrome ? AV_PIX_FMT_GRAY8 : AV_PIX_FMT_YUV420P; - else if (img->bit_depth == 10) - return img->monochrome ? AV_PIX_FMT_GRAY10 : AV_PIX_FMT_YUV420P10; - else - return img->monochrome ? AV_PIX_FMT_GRAY12 : AV_PIX_FMT_YUV420P12; - case AOM_IMG_FMT_I422: - case AOM_IMG_FMT_I42216: - if (img->bit_depth == 8) - return AV_PIX_FMT_YUV422P; - else if (img->bit_depth == 10) - return AV_PIX_FMT_YUV422P10; - else - return AV_PIX_FMT_YUV422P12; - case AOM_IMG_FMT_I444: - case AOM_IMG_FMT_I44416: - if (img->bit_depth == 8) - return AV_PIX_FMT_YUV444P; - else if (img->bit_depth == 10) - return AV_PIX_FMT_YUV444P10; - else - return AV_PIX_FMT_YUV444P12; - }; - return AV_PIX_FMT_NONE; -} - -static int aom_encode(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *frame, int *got_packet) -{ - AOMContext *ctx = avctx->priv_data; - struct aom_image *rawimg = NULL; - int64_t timestamp = 0; - unsigned long duration = 0; - int res, coded_size; - aom_enc_frame_flags_t flags = 0; - - if (frame) { - rawimg = &ctx->rawimg; - rawimg->planes[AOM_PLANE_Y] = frame->data[0]; - rawimg->planes[AOM_PLANE_U] = frame->data[1]; - rawimg->planes[AOM_PLANE_V] = frame->data[2]; - rawimg->stride[AOM_PLANE_Y] = frame->linesize[0]; - rawimg->stride[AOM_PLANE_U] = frame->linesize[1]; - rawimg->stride[AOM_PLANE_V] = frame->linesize[2]; - timestamp = frame->pts; - - if (frame->duration > ULONG_MAX) { - av_log(avctx, AV_LOG_WARNING, - "Frame duration too large: %"PRId64"\n", frame->duration); - } else - duration = frame->duration ? frame->duration : avctx->ticks_per_frame; - - switch (frame->color_range) { - case AVCOL_RANGE_MPEG: - rawimg->range = AOM_CR_STUDIO_RANGE; - break; - case AVCOL_RANGE_JPEG: - rawimg->range = AOM_CR_FULL_RANGE; - break; - } - - if (frame->pict_type == AV_PICTURE_TYPE_I) - flags |= AOM_EFLAG_FORCE_KF; - } - - res = aom_codec_encode(&ctx->encoder, rawimg, timestamp, duration, flags); - if (res != AOM_CODEC_OK) { - log_encoder_error(avctx, "Error encoding frame"); - return AVERROR_INVALIDDATA; - } - coded_size = queue_frames(avctx, pkt); - if (coded_size < 0) - return coded_size; - - if (!frame && avctx->flags & AV_CODEC_FLAG_PASS1) { - size_t b64_size = AV_BASE64_SIZE(ctx->twopass_stats.sz); - - avctx->stats_out = av_malloc(b64_size); - if (!avctx->stats_out) { - av_log(avctx, AV_LOG_ERROR, "Stat buffer alloc (%"SIZE_SPECIFIER" bytes) failed\n", - b64_size); - return AVERROR(ENOMEM); - } - av_base64_encode(avctx->stats_out, b64_size, ctx->twopass_stats.buf, - ctx->twopass_stats.sz); - } - - *got_packet = !!coded_size; - - if (*got_packet && avctx->flags & AV_CODEC_FLAG_RECON_FRAME) { - AVCodecInternal *avci = avctx->internal; - struct aom_image img; - - av_frame_unref(avci->recon_frame); - - res = codecctl_imgp(avctx, AV1_GET_NEW_FRAME_IMAGE, &img); - if (res < 0) - return res; - - avci->recon_frame->format = aomfmt_to_pixfmt(&img); - if (avci->recon_frame->format == AV_PIX_FMT_NONE) { - av_log(ctx, AV_LOG_ERROR, - "Unhandled reconstructed frame colorspace: %d\n", - img.fmt); - return AVERROR(ENOSYS); - } - - avci->recon_frame->width = img.d_w; - avci->recon_frame->height = img.d_h; - - res = av_frame_get_buffer(avci->recon_frame, 0); - if (res < 0) - return res; - - if ((img.fmt & AOM_IMG_FMT_HIGHBITDEPTH) && img.bit_depth == 8) - ff_aom_image_copy_16_to_8(avci->recon_frame, &img); - else { - const uint8_t *planes[4] = { img.planes[0], img.planes[1], img.planes[2] }; - const int stride[4] = { img.stride[0], img.stride[1], img.stride[2] }; - - av_image_copy(avci->recon_frame->data, avci->recon_frame->linesize, planes, - stride, avci->recon_frame->format, img.d_w, img.d_h); - } - } - - return 0; -} - -static const enum AVPixelFormat av1_pix_fmts[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_NONE -}; - -static const enum AVPixelFormat av1_pix_fmts_with_gray[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_GRAY8, - AV_PIX_FMT_NONE -}; - -static const enum AVPixelFormat av1_pix_fmts_highbd[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_YUV420P10, - AV_PIX_FMT_YUV422P10, - AV_PIX_FMT_YUV444P10, - AV_PIX_FMT_YUV420P12, - AV_PIX_FMT_YUV422P12, - AV_PIX_FMT_YUV444P12, - AV_PIX_FMT_GBRP10, - AV_PIX_FMT_GBRP12, - AV_PIX_FMT_NONE -}; - -static const enum AVPixelFormat av1_pix_fmts_highbd_with_gray[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_YUV420P10, - AV_PIX_FMT_YUV422P10, - AV_PIX_FMT_YUV444P10, - AV_PIX_FMT_YUV420P12, - AV_PIX_FMT_YUV422P12, - AV_PIX_FMT_YUV444P12, - AV_PIX_FMT_GBRP10, - AV_PIX_FMT_GBRP12, - AV_PIX_FMT_GRAY8, - AV_PIX_FMT_GRAY10, - AV_PIX_FMT_GRAY12, - AV_PIX_FMT_NONE -}; - -static av_cold void av1_init_static(FFCodec *codec) -{ - int supports_monochrome = aom_codec_version() >= 20001; - aom_codec_caps_t codec_caps = aom_codec_get_caps(aom_codec_av1_cx()); - if (codec_caps & AOM_CODEC_CAP_HIGHBITDEPTH) - codec->p.pix_fmts = supports_monochrome ? av1_pix_fmts_highbd_with_gray : - av1_pix_fmts_highbd; - else - codec->p.pix_fmts = supports_monochrome ? av1_pix_fmts_with_gray : - av1_pix_fmts; - - if (aom_codec_version_major() < 2) - codec->p.capabilities |= AV_CODEC_CAP_EXPERIMENTAL; -} - -static av_cold int av1_init(AVCodecContext *avctx) -{ - return aom_init(avctx, aom_codec_av1_cx()); -} - -#define OFFSET(x) offsetof(AOMContext, x) -#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "cpu-used", "Quality/Speed ratio modifier", OFFSET(cpu_used), AV_OPT_TYPE_INT, {.i64 = 1}, 0, 8, VE}, - { "auto-alt-ref", "Enable use of alternate reference " - "frames (2-pass only)", OFFSET(auto_alt_ref), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 2, VE}, - { "lag-in-frames", "Number of frames to look ahead at for " - "alternate reference frame selection", OFFSET(lag_in_frames), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, VE}, - { "arnr-max-frames", "altref noise reduction max frame count", OFFSET(arnr_max_frames), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, VE}, - { "arnr-strength", "altref noise reduction filter strength", OFFSET(arnr_strength), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 6, VE}, - { "aq-mode", "adaptive quantization mode", OFFSET(aq_mode), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 4, VE, "aq_mode"}, - { "none", "Aq not used", 0, AV_OPT_TYPE_CONST, {.i64 = 0}, 0, 0, VE, "aq_mode"}, - { "variance", "Variance based Aq", 0, AV_OPT_TYPE_CONST, {.i64 = 1}, 0, 0, VE, "aq_mode"}, - { "complexity", "Complexity based Aq", 0, AV_OPT_TYPE_CONST, {.i64 = 2}, 0, 0, VE, "aq_mode"}, - { "cyclic", "Cyclic Refresh Aq", 0, AV_OPT_TYPE_CONST, {.i64 = 3}, 0, 0, VE, "aq_mode"}, - { "error-resilience", "Error resilience configuration", OFFSET(error_resilient), AV_OPT_TYPE_FLAGS, {.i64 = 0}, INT_MIN, INT_MAX, VE, "er"}, - { "default", "Improve resiliency against losses of whole frames", 0, AV_OPT_TYPE_CONST, {.i64 = AOM_ERROR_RESILIENT_DEFAULT}, 0, 0, VE, "er"}, - { "crf", "Select the quality for constant quality mode", offsetof(AOMContext, crf), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 63, VE }, - { "static-thresh", "A change threshold on blocks below which they will be skipped by the encoder", OFFSET(static_thresh), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE }, - { "drop-threshold", "Frame drop threshold", offsetof(AOMContext, drop_threshold), AV_OPT_TYPE_INT, {.i64 = 0 }, INT_MIN, INT_MAX, VE }, - { "denoise-noise-level", "Amount of noise to be removed", OFFSET(denoise_noise_level), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, VE}, - { "denoise-block-size", "Denoise block size ", OFFSET(denoise_block_size), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, VE}, - { "undershoot-pct", "Datarate undershoot (min) target (%)", OFFSET(rc_undershoot_pct), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 100, VE}, - { "overshoot-pct", "Datarate overshoot (max) target (%)", OFFSET(rc_overshoot_pct), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 1000, VE}, - { "minsection-pct", "GOP min bitrate (% of target)", OFFSET(minsection_pct), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 100, VE}, - { "maxsection-pct", "GOP max bitrate (% of target)", OFFSET(maxsection_pct), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 5000, VE}, - { "frame-parallel", "Enable frame parallel decodability features", OFFSET(frame_parallel), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "tiles", "Tile columns x rows", OFFSET(tile_cols), AV_OPT_TYPE_IMAGE_SIZE, { .str = NULL }, 0, 0, VE }, - { "tile-columns", "Log2 of number of tile columns to use", OFFSET(tile_cols_log2), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 6, VE}, - { "tile-rows", "Log2 of number of tile rows to use", OFFSET(tile_rows_log2), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 6, VE}, - { "row-mt", "Enable row based multi-threading", OFFSET(row_mt), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-cdef", "Enable CDEF filtering", OFFSET(enable_cdef), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-global-motion", "Enable global motion", OFFSET(enable_global_motion), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-intrabc", "Enable intra block copy prediction mode", OFFSET(enable_intrabc), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-restoration", "Enable Loop Restoration filtering", OFFSET(enable_restoration), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "usage", "Quality and compression efficiency vs speed trade-off", OFFSET(usage), AV_OPT_TYPE_INT, {.i64 = 0}, 0, INT_MAX, VE, "usage"}, - { "good", "Good quality", 0, AV_OPT_TYPE_CONST, {.i64 = 0 /* AOM_USAGE_GOOD_QUALITY */}, 0, 0, VE, "usage"}, - { "realtime", "Realtime encoding", 0, AV_OPT_TYPE_CONST, {.i64 = 1 /* AOM_USAGE_REALTIME */}, 0, 0, VE, "usage"}, - { "allintra", "All Intra encoding", 0, AV_OPT_TYPE_CONST, {.i64 = 2 /* AOM_USAGE_ALL_INTRA */}, 0, 0, VE, "usage"}, - { "tune", "The metric that the encoder tunes for. Automatically chosen by the encoder by default", OFFSET(tune), AV_OPT_TYPE_INT, {.i64 = -1}, -1, AOM_TUNE_SSIM, VE, "tune"}, - { "psnr", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = AOM_TUNE_PSNR}, 0, 0, VE, "tune"}, - { "ssim", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = AOM_TUNE_SSIM}, 0, 0, VE, "tune"}, - FF_AV1_PROFILE_OPTS - { "still-picture", "Encode in single frame mode (typically used for still AVIF images).", OFFSET(still_picture), AV_OPT_TYPE_BOOL, {.i64 = 0}, -1, 1, VE }, - { "enable-rect-partitions", "Enable rectangular partitions", OFFSET(enable_rect_partitions), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-1to4-partitions", "Enable 1:4/4:1 partitions", OFFSET(enable_1to4_partitions), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-ab-partitions", "Enable ab shape partitions", OFFSET(enable_ab_partitions), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-angle-delta", "Enable angle delta intra prediction", OFFSET(enable_angle_delta), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-cfl-intra", "Enable chroma predicted from luma intra prediction", OFFSET(enable_cfl_intra), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-filter-intra", "Enable filter intra predictor", OFFSET(enable_filter_intra), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-intra-edge-filter", "Enable intra edge filter", OFFSET(enable_intra_edge_filter), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-smooth-intra", "Enable smooth intra prediction mode", OFFSET(enable_smooth_intra), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-paeth-intra", "Enable paeth predictor in intra prediction", OFFSET(enable_paeth_intra), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-palette", "Enable palette prediction mode", OFFSET(enable_palette), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-flip-idtx", "Enable extended transform type", OFFSET(enable_flip_idtx), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-tx64", "Enable 64-pt transform", OFFSET(enable_tx64), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "reduced-tx-type-set", "Use reduced set of transform types", OFFSET(reduced_tx_type_set), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "use-intra-dct-only", "Use DCT only for INTRA modes", OFFSET(use_intra_dct_only), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "use-inter-dct-only", "Use DCT only for INTER modes", OFFSET(use_inter_dct_only), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "use-intra-default-tx-only", "Use default-transform only for INTRA modes", OFFSET(use_intra_default_tx_only), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-ref-frame-mvs", "Enable temporal mv prediction", OFFSET(enable_ref_frame_mvs), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-reduced-reference-set", "Use reduced set of single and compound references", OFFSET(enable_reduced_reference_set), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-obmc", "Enable obmc", OFFSET(enable_obmc), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-dual-filter", "Enable dual filter", OFFSET(enable_dual_filter), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-diff-wtd-comp", "Enable difference-weighted compound", OFFSET(enable_diff_wtd_comp), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-dist-wtd-comp", "Enable distance-weighted compound", OFFSET(enable_dist_wtd_comp), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-onesided-comp", "Enable one sided compound", OFFSET(enable_onesided_comp), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-interinter-wedge", "Enable interinter wedge compound", OFFSET(enable_interinter_wedge), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-interintra-wedge", "Enable interintra wedge compound", OFFSET(enable_interintra_wedge), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-masked-comp", "Enable masked compound", OFFSET(enable_masked_comp), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-interintra-comp", "Enable interintra compound", OFFSET(enable_interintra_comp), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, - { "enable-smooth-interintra", "Enable smooth interintra mode", OFFSET(enable_smooth_interintra), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE}, -#if AOM_ENCODER_ABI_VERSION >= 23 - { "aom-params", "Set libaom options using a :-separated list of key=value pairs", OFFSET(aom_params), AV_OPT_TYPE_DICT, { 0 }, 0, 0, VE }, -#endif - { NULL }, -}; - -static const FFCodecDefault defaults[] = { - { "b", "0" }, - { "qmin", "-1" }, - { "qmax", "-1" }, - { "g", "-1" }, - { "keyint_min", "-1" }, - { NULL }, -}; - -static const AVClass class_aom = { - .class_name = "libaom-av1 encoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -FFCodec ff_libaom_av1_encoder = { - .p.name = "libaom-av1", - CODEC_LONG_NAME("libaom AV1"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_AV1, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_ENCODER_RECON_FRAME | - AV_CODEC_CAP_OTHER_THREADS, - .p.profiles = NULL_IF_CONFIG_SMALL(ff_av1_profiles), - .p.priv_class = &class_aom, - .p.wrapper_name = "libaom", - .priv_data_size = sizeof(AOMContext), - .init = av1_init, - FF_CODEC_ENCODE_CB(aom_encode), - .close = aom_free, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE | - FF_CODEC_CAP_INIT_CLEANUP | - FF_CODEC_CAP_AUTO_THREADS, - .defaults = defaults, - .init_static_data = av1_init_static, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/aacsbr_mips.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/aacsbr_mips.h deleted file mode 100644 index 4750c94024dec2e6ca21e6ab1f8ff1212b6e4656..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/aacsbr_mips.h +++ /dev/null @@ -1,496 +0,0 @@ -/* - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Authors: Djordje Pesut (djordje@mips.com) - * Mirjana Vulin (mvulin@mips.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Reference: libavcodec/aacsbr.c - */ - -#ifndef AVCODEC_MIPS_AACSBR_MIPS_H -#define AVCODEC_MIPS_AACSBR_MIPS_H - -#include "libavcodec/aac.h" -#include "libavcodec/sbr.h" -#include "libavutil/mips/asmdefs.h" - -#if HAVE_INLINE_ASM -static void sbr_qmf_analysis_mips(AVFloatDSPContext *fdsp, AVTXContext *mdct, av_tx_fn mdct_fn, - SBRDSPContext *sbrdsp, const float *in, float *x, - float z[320], float W[2][32][32][2], int buf_idx) -{ - int i; - float *w0; - float *w1; - int temp0, temp1, temp2, temp3, temp4, temp5, temp6, temp7; - - w0 = x; - w1 = x + 1024; - for(i = 0; i < 36; i++) - { - /* loop unrolled 8 times */ - __asm__ volatile( - "lw %[temp0], 0(%[w1]) \n\t" - "lw %[temp1], 4(%[w1]) \n\t" - "lw %[temp2], 8(%[w1]) \n\t" - "lw %[temp3], 12(%[w1]) \n\t" - "lw %[temp4], 16(%[w1]) \n\t" - "lw %[temp5], 20(%[w1]) \n\t" - "lw %[temp6], 24(%[w1]) \n\t" - "lw %[temp7], 28(%[w1]) \n\t" - "sw %[temp0], 0(%[w0]) \n\t" - "sw %[temp1], 4(%[w0]) \n\t" - "sw %[temp2], 8(%[w0]) \n\t" - "sw %[temp3], 12(%[w0]) \n\t" - "sw %[temp4], 16(%[w0]) \n\t" - "sw %[temp5], 20(%[w0]) \n\t" - "sw %[temp6], 24(%[w0]) \n\t" - "sw %[temp7], 28(%[w0]) \n\t" - PTR_ADDIU " %[w0], %[w0], 32 \n\t" - PTR_ADDIU " %[w1], %[w1], 32 \n\t" - - : [w0]"+r"(w0), [w1]"+r"(w1), - [temp0]"=&r"(temp0), [temp1]"=&r"(temp1), - [temp2]"=&r"(temp2), [temp3]"=&r"(temp3), - [temp4]"=&r"(temp4), [temp5]"=&r"(temp5), - [temp6]"=&r"(temp6), [temp7]"=&r"(temp7) - : - : "memory" - ); - } - - w0 = x + 288; - w1 = (float*)in; - for(i = 0; i < 128; i++) - { - /* loop unrolled 8 times */ - __asm__ volatile( - "lw %[temp0], 0(%[w1]) \n\t" - "lw %[temp1], 4(%[w1]) \n\t" - "lw %[temp2], 8(%[w1]) \n\t" - "lw %[temp3], 12(%[w1]) \n\t" - "lw %[temp4], 16(%[w1]) \n\t" - "lw %[temp5], 20(%[w1]) \n\t" - "lw %[temp6], 24(%[w1]) \n\t" - "lw %[temp7], 28(%[w1]) \n\t" - "sw %[temp0], 0(%[w0]) \n\t" - "sw %[temp1], 4(%[w0]) \n\t" - "sw %[temp2], 8(%[w0]) \n\t" - "sw %[temp3], 12(%[w0]) \n\t" - "sw %[temp4], 16(%[w0]) \n\t" - "sw %[temp5], 20(%[w0]) \n\t" - "sw %[temp6], 24(%[w0]) \n\t" - "sw %[temp7], 28(%[w0]) \n\t" - PTR_ADDIU " %[w0], %[w0], 32 \n\t" - PTR_ADDIU " %[w1], %[w1], 32 \n\t" - - : [w0]"+r"(w0), [w1]"+r"(w1), - [temp0]"=&r"(temp0), [temp1]"=&r"(temp1), - [temp2]"=&r"(temp2), [temp3]"=&r"(temp3), - [temp4]"=&r"(temp4), [temp5]"=&r"(temp5), - [temp6]"=&r"(temp6), [temp7]"=&r"(temp7) - : - : "memory" - ); - } - - for (i = 0; i < 32; i++) { // numTimeSlots*RATE = 16*2 as 960 sample frames - // are not supported - fdsp->vector_fmul_reverse(z, sbr_qmf_window_ds, x, 320); - sbrdsp->sum64x5(z); - sbrdsp->qmf_pre_shuffle(z); - mdct_fn(mdct, z, z+64, sizeof(float)); - sbrdsp->qmf_post_shuffle(W[buf_idx][i], z); - x += 32; - } -} - -#if HAVE_MIPSFPU -#if !HAVE_MIPS32R6 && !HAVE_MIPS64R6 -static void sbr_qmf_synthesis_mips(AVTXContext *mdct, av_tx_fn mdct_fn, - SBRDSPContext *sbrdsp, AVFloatDSPContext *fdsp, - float *out, float X[2][38][64], - float mdct_buf[2][64], - float *v0, int *v_off, const unsigned int div) -{ - int i, n; - const float *sbr_qmf_window = div ? sbr_qmf_window_ds : sbr_qmf_window_us; - const int step = 128 >> div; - float *v; - float temp0, temp1, temp2, temp3, temp4, temp5, temp6, temp7, temp8, temp9, temp10, temp11, temp12, temp13; - float temp14, temp15, temp16, temp17, temp18, temp19; - float *vv0, *s0, *dst; - dst = out; - - for (i = 0; i < 32; i++) { - if (*v_off < step) { - int saved_samples = (1280 - 128) >> div; - memcpy(&v0[SBR_SYNTHESIS_BUF_SIZE - saved_samples], v0, saved_samples * sizeof(float)); - *v_off = SBR_SYNTHESIS_BUF_SIZE - saved_samples - step; - } else { - *v_off -= step; - } - v = v0 + *v_off; - if (div) { - for (n = 0; n < 32; n++) { - X[0][i][ n] = -X[0][i][n]; - X[0][i][32+n] = X[1][i][31-n]; - } - mdct_fn(mdct, mdct_buf[0], X[0][i], sizeof(float)); - sbrdsp->qmf_deint_neg(v, mdct_buf[0]); - } else { - sbrdsp->neg_odd_64(X[1][i]); - mdct_fn(mdct, mdct_buf[0], X[0][i], sizeof(float)); - mdct_fn(mdct, mdct_buf[1], X[1][i], sizeof(float)); - sbrdsp->qmf_deint_bfly(v, mdct_buf[1], mdct_buf[0]); - } - - if(div == 0) - { - float *v0_end; - vv0 = v; - v0_end = v + 60; - s0 = (float*)sbr_qmf_window; - - /* 10 calls of function vector_fmul_add merged into one loop - and loop unrolled 4 times */ - __asm__ volatile( - ".set push \n\t" - ".set noreorder \n\t" - "lwc1 %[temp4], 0(%[v0]) \n\t" - "lwc1 %[temp5], 0(%[s0]) \n\t" - "lwc1 %[temp6], 4(%[v0]) \n\t" - "lwc1 %[temp7], 4(%[s0]) \n\t" - "lwc1 %[temp8], 8(%[v0]) \n\t" - "lwc1 %[temp9], 8(%[s0]) \n\t" - "lwc1 %[temp10], 12(%[v0]) \n\t" - "lwc1 %[temp11], 12(%[s0]) \n\t" - "lwc1 %[temp12], 768(%[v0]) \n\t" - "lwc1 %[temp13], 256(%[s0]) \n\t" - "lwc1 %[temp14], 772(%[v0]) \n\t" - "lwc1 %[temp15], 260(%[s0]) \n\t" - "lwc1 %[temp16], 776(%[v0]) \n\t" - "lwc1 %[temp17], 264(%[s0]) \n\t" - "lwc1 %[temp18], 780(%[v0]) \n\t" - "lwc1 %[temp19], 268(%[s0]) \n\t" - "1: \n\t" - "mul.s %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 1024(%[v0]) \n\t" - "mul.s %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 512(%[s0]) \n\t" - "mul.s %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 1028(%[v0]) \n\t" - "mul.s %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 516(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 1032(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 520(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 1036(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 524(%[s0]) \n\t" - "lwc1 %[temp12], 1792(%[v0]) \n\t" - "lwc1 %[temp13], 768(%[s0]) \n\t" - "lwc1 %[temp14], 1796(%[v0]) \n\t" - "lwc1 %[temp15], 772(%[s0]) \n\t" - "lwc1 %[temp16], 1800(%[v0]) \n\t" - "lwc1 %[temp17], 776(%[s0]) \n\t" - "lwc1 %[temp18], 1804(%[v0]) \n\t" - "lwc1 %[temp19], 780(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 2048(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 1024(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 2052(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 1028(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 2056(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 1032(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 2060(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 1036(%[s0]) \n\t" - "lwc1 %[temp12], 2816(%[v0]) \n\t" - "lwc1 %[temp13], 1280(%[s0]) \n\t" - "lwc1 %[temp14], 2820(%[v0]) \n\t" - "lwc1 %[temp15], 1284(%[s0]) \n\t" - "lwc1 %[temp16], 2824(%[v0]) \n\t" - "lwc1 %[temp17], 1288(%[s0]) \n\t" - "lwc1 %[temp18], 2828(%[v0]) \n\t" - "lwc1 %[temp19], 1292(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 3072(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 1536(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 3076(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 1540(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 3080(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 1544(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 3084(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 1548(%[s0]) \n\t" - "lwc1 %[temp12], 3840(%[v0]) \n\t" - "lwc1 %[temp13], 1792(%[s0]) \n\t" - "lwc1 %[temp14], 3844(%[v0]) \n\t" - "lwc1 %[temp15], 1796(%[s0]) \n\t" - "lwc1 %[temp16], 3848(%[v0]) \n\t" - "lwc1 %[temp17], 1800(%[s0]) \n\t" - "lwc1 %[temp18], 3852(%[v0]) \n\t" - "lwc1 %[temp19], 1804(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 4096(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 2048(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 4100(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 2052(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 4104(%[v0]) \n\t" - PTR_ADDIU "%[dst], %[dst], 16 \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 2056(%[s0]) \n\t" - PTR_ADDIU " %[s0], %[s0], 16 \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 4108(%[v0]) \n\t" - PTR_ADDIU " %[v0], %[v0], 16 \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 2044(%[s0]) \n\t" - "lwc1 %[temp12], 4848(%[v0]) \n\t" - "lwc1 %[temp13], 2288(%[s0]) \n\t" - "lwc1 %[temp14], 4852(%[v0]) \n\t" - "lwc1 %[temp15], 2292(%[s0]) \n\t" - "lwc1 %[temp16], 4856(%[v0]) \n\t" - "lwc1 %[temp17], 2296(%[s0]) \n\t" - "lwc1 %[temp18], 4860(%[v0]) \n\t" - "lwc1 %[temp19], 2300(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 0(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 0(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 4(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 4(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 8(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 8(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 12(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 12(%[s0]) \n\t" - "lwc1 %[temp12], 768(%[v0]) \n\t" - "lwc1 %[temp13], 256(%[s0]) \n\t" - "lwc1 %[temp14], 772(%[v0]) \n\t" - "lwc1 %[temp15], 260(%[s0]) \n\t" - "lwc1 %[temp16], 776(%[v0]) \n\t" - "lwc1 %[temp17], 264(%[s0]) \n\t" - "lwc1 %[temp18], 780(%[v0]) \n\t" - "lwc1 %[temp19], 268(%[s0]) \n\t" - "swc1 %[temp0], -16(%[dst]) \n\t" - "swc1 %[temp1], -12(%[dst]) \n\t" - "swc1 %[temp2], -8(%[dst]) \n\t" - "bne %[v0], %[v0_end], 1b \n\t" - " swc1 %[temp3], -4(%[dst]) \n\t" - "mul.s %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 1024(%[v0]) \n\t" - "mul.s %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 512(%[s0]) \n\t" - "mul.s %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 1028(%[v0]) \n\t" - "mul.s %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 516(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 1032(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 520(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 1036(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 524(%[s0]) \n\t" - "lwc1 %[temp12], 1792(%[v0]) \n\t" - "lwc1 %[temp13], 768(%[s0]) \n\t" - "lwc1 %[temp14], 1796(%[v0]) \n\t" - "lwc1 %[temp15], 772(%[s0]) \n\t" - "lwc1 %[temp16], 1800(%[v0]) \n\t" - "lwc1 %[temp17], 776(%[s0]) \n\t" - "lwc1 %[temp18], 1804(%[v0]) \n\t" - "lwc1 %[temp19], 780(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 2048(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 1024(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 2052(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 1028(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 2056(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 1032(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 2060(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 1036(%[s0]) \n\t" - "lwc1 %[temp12], 2816(%[v0]) \n\t" - "lwc1 %[temp13], 1280(%[s0]) \n\t" - "lwc1 %[temp14], 2820(%[v0]) \n\t" - "lwc1 %[temp15], 1284(%[s0]) \n\t" - "lwc1 %[temp16], 2824(%[v0]) \n\t" - "lwc1 %[temp17], 1288(%[s0]) \n\t" - "lwc1 %[temp18], 2828(%[v0]) \n\t" - "lwc1 %[temp19], 1292(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 3072(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 1536(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 3076(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 1540(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 3080(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 1544(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 3084(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 1548(%[s0]) \n\t" - "lwc1 %[temp12], 3840(%[v0]) \n\t" - "lwc1 %[temp13], 1792(%[s0]) \n\t" - "lwc1 %[temp14], 3844(%[v0]) \n\t" - "lwc1 %[temp15], 1796(%[s0]) \n\t" - "lwc1 %[temp16], 3848(%[v0]) \n\t" - "lwc1 %[temp17], 1800(%[s0]) \n\t" - "lwc1 %[temp18], 3852(%[v0]) \n\t" - "lwc1 %[temp19], 1804(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp4], 4096(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp5], 2048(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp6], 4100(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp7], 2052(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - "lwc1 %[temp8], 4104(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "lwc1 %[temp9], 2056(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "lwc1 %[temp10], 4108(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "lwc1 %[temp11], 2060(%[s0]) \n\t" - "lwc1 %[temp12], 4864(%[v0]) \n\t" - "lwc1 %[temp13], 2304(%[s0]) \n\t" - "lwc1 %[temp14], 4868(%[v0]) \n\t" - "lwc1 %[temp15], 2308(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp4], %[temp5] \n\t" - "lwc1 %[temp16], 4872(%[v0]) \n\t" - "madd.s %[temp1], %[temp1], %[temp6], %[temp7] \n\t" - "lwc1 %[temp17], 2312(%[s0]) \n\t" - "madd.s %[temp2], %[temp2], %[temp8], %[temp9] \n\t" - "lwc1 %[temp18], 4876(%[v0]) \n\t" - "madd.s %[temp3], %[temp3], %[temp10], %[temp11] \n\t" - "lwc1 %[temp19], 2316(%[s0]) \n\t" - "madd.s %[temp0], %[temp0], %[temp12], %[temp13] \n\t" - PTR_ADDIU "%[dst], %[dst], 16 \n\t" - "madd.s %[temp1], %[temp1], %[temp14], %[temp15] \n\t" - "madd.s %[temp2], %[temp2], %[temp16], %[temp17] \n\t" - "madd.s %[temp3], %[temp3], %[temp18], %[temp19] \n\t" - "swc1 %[temp0], -16(%[dst]) \n\t" - "swc1 %[temp1], -12(%[dst]) \n\t" - "swc1 %[temp2], -8(%[dst]) \n\t" - "swc1 %[temp3], -4(%[dst]) \n\t" - ".set pop \n\t" - - : [dst]"+r"(dst), [v0]"+r"(vv0), [s0]"+r"(s0), - [temp0]"=&f"(temp0), [temp1]"=&f"(temp1), [temp2]"=&f"(temp2), - [temp3]"=&f"(temp3), [temp4]"=&f"(temp4), [temp5]"=&f"(temp5), - [temp6]"=&f"(temp6), [temp7]"=&f"(temp7), [temp8]"=&f"(temp8), - [temp9]"=&f"(temp9), [temp10]"=&f"(temp10), [temp11]"=&f"(temp11), - [temp12]"=&f"(temp12), [temp13]"=&f"(temp13), [temp14]"=&f"(temp14), - [temp15]"=&f"(temp15), [temp16]"=&f"(temp16), [temp17]"=&f"(temp17), - [temp18]"=&f"(temp18), [temp19]"=&f"(temp19) - : [v0_end]"r"(v0_end) - : "memory" - ); - } - else - { - fdsp->vector_fmul (out, v , sbr_qmf_window , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 192 >> div), sbr_qmf_window + ( 64 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 256 >> div), sbr_qmf_window + (128 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 448 >> div), sbr_qmf_window + (192 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 512 >> div), sbr_qmf_window + (256 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 704 >> div), sbr_qmf_window + (320 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 768 >> div), sbr_qmf_window + (384 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + ( 960 >> div), sbr_qmf_window + (448 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + (1024 >> div), sbr_qmf_window + (512 >> div), out , 64 >> div); - fdsp->vector_fmul_add(out, v + (1216 >> div), sbr_qmf_window + (576 >> div), out , 64 >> div); - out += 64 >> div; - } - } -} - -#define sbr_qmf_analysis sbr_qmf_analysis_mips -#define sbr_qmf_synthesis sbr_qmf_synthesis_mips - -#endif /* !HAVE_MIPS32R6 && !HAVE_MIPS64R6 */ -#endif /* HAVE_MIPSFPU */ -#endif /* HAVE_INLINE_ASM */ - -#endif /* AVCODEC_MIPS_AACSBR_MIPS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_msa.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_msa.c deleted file mode 100644 index aa9ef770ebd382abd8afd03f1abbab6ca67563b4..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_msa.c +++ /dev/null @@ -1,250 +0,0 @@ -/* - * Copyright (c) 2015 Manojkumar Bhosale (Manojkumar.Bhosale@imgtec.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/mips/generic_macros_msa.h" -#include "h263dsp_mips.h" - -static void h263_dct_unquantize_msa(int16_t *block, int16_t qmul, - int16_t qadd, int8_t n_coeffs, - uint8_t loop_start) -{ - int16_t *block_dup = block; - int32_t level, cnt; - v8i16 block_vec, qmul_vec, qadd_vec, sub; - v8i16 add, mask, mul, zero_mask; - - qmul_vec = __msa_fill_h(qmul); - qadd_vec = __msa_fill_h(qadd); - for (cnt = 0; cnt < (n_coeffs >> 3); cnt++) { - block_vec = LD_SH(block_dup + loop_start); - mask = __msa_clti_s_h(block_vec, 0); - zero_mask = __msa_ceqi_h(block_vec, 0); - mul = block_vec * qmul_vec; - sub = mul - qadd_vec; - add = mul + qadd_vec; - add = (v8i16) __msa_bmnz_v((v16u8) add, (v16u8) sub, (v16u8) mask); - block_vec = (v8i16) __msa_bmnz_v((v16u8) add, (v16u8) block_vec, - (v16u8) zero_mask); - ST_SH(block_vec, block_dup + loop_start); - block_dup += 8; - } - - cnt = ((n_coeffs >> 3) * 8) + loop_start; - - for (; cnt <= n_coeffs; cnt++) { - level = block[cnt]; - if (level) { - if (level < 0) { - level = level * qmul - qadd; - } else { - level = level * qmul + qadd; - } - block[cnt] = level; - } - } -} - -static int32_t mpeg2_dct_unquantize_inter_msa(int16_t *block, - int32_t qscale, - const int16_t *quant_matrix) -{ - int32_t cnt, sum_res = -1; - v8i16 block_vec, block_neg, qscale_vec, mask; - v8i16 block_org0, block_org1, block_org2, block_org3; - v8i16 quant_m0, quant_m1, quant_m2, quant_m3; - v8i16 sum, mul, zero_mask; - v4i32 mul_vec, qscale_l, qscale_r, quant_m_r, quant_m_l; - v4i32 block_l, block_r, sad; - - qscale_vec = __msa_fill_h(qscale); - for (cnt = 0; cnt < 2; cnt++) { - LD_SH4(block, 8, block_org0, block_org1, block_org2, block_org3); - LD_SH4(quant_matrix, 8, quant_m0, quant_m1, quant_m2, quant_m3); - mask = __msa_clti_s_h(block_org0, 0); - zero_mask = __msa_ceqi_h(block_org0, 0); - block_neg = -block_org0; - block_vec = (v8i16) __msa_bmnz_v((v16u8) block_org0, (v16u8) block_neg, - (v16u8) mask); - block_vec <<= 1; - block_vec += 1; - UNPCK_SH_SW(block_vec, block_r, block_l); - UNPCK_SH_SW(qscale_vec, qscale_r, qscale_l); - UNPCK_SH_SW(quant_m0, quant_m_r, quant_m_l); - mul_vec = block_l * qscale_l; - mul_vec *= quant_m_l; - block_l = mul_vec >> 4; - mul_vec = block_r * qscale_r; - mul_vec *= quant_m_r; - block_r = mul_vec >> 4; - mul = (v8i16) __msa_pckev_h((v8i16) block_l, (v8i16) block_r); - block_neg = - mul; - sum = (v8i16) __msa_bmnz_v((v16u8) mul, (v16u8) block_neg, - (v16u8) mask); - sum = (v8i16) __msa_bmnz_v((v16u8) sum, (v16u8) block_org0, - (v16u8) zero_mask); - ST_SH(sum, block); - block += 8; - quant_matrix += 8; - sad = __msa_hadd_s_w(sum, sum); - sum_res += HADD_SW_S32(sad); - mask = __msa_clti_s_h(block_org1, 0); - zero_mask = __msa_ceqi_h(block_org1, 0); - block_neg = - block_org1; - block_vec = (v8i16) __msa_bmnz_v((v16u8) block_org1, (v16u8) block_neg, - (v16u8) mask); - block_vec <<= 1; - block_vec += 1; - UNPCK_SH_SW(block_vec, block_r, block_l); - UNPCK_SH_SW(qscale_vec, qscale_r, qscale_l); - UNPCK_SH_SW(quant_m1, quant_m_r, quant_m_l); - mul_vec = block_l * qscale_l; - mul_vec *= quant_m_l; - block_l = mul_vec >> 4; - mul_vec = block_r * qscale_r; - mul_vec *= quant_m_r; - block_r = mul_vec >> 4; - mul = __msa_pckev_h((v8i16) block_l, (v8i16) block_r); - block_neg = - mul; - sum = (v8i16) __msa_bmnz_v((v16u8) mul, (v16u8) block_neg, - (v16u8) mask); - sum = (v8i16) __msa_bmnz_v((v16u8) sum, (v16u8) block_org1, - (v16u8) zero_mask); - ST_SH(sum, block); - - block += 8; - quant_matrix += 8; - sad = __msa_hadd_s_w(sum, sum); - sum_res += HADD_SW_S32(sad); - mask = __msa_clti_s_h(block_org2, 0); - zero_mask = __msa_ceqi_h(block_org2, 0); - block_neg = - block_org2; - block_vec = (v8i16) __msa_bmnz_v((v16u8) block_org2, (v16u8) block_neg, - (v16u8) mask); - block_vec <<= 1; - block_vec += 1; - UNPCK_SH_SW(block_vec, block_r, block_l); - UNPCK_SH_SW(qscale_vec, qscale_r, qscale_l); - UNPCK_SH_SW(quant_m2, quant_m_r, quant_m_l); - mul_vec = block_l * qscale_l; - mul_vec *= quant_m_l; - block_l = mul_vec >> 4; - mul_vec = block_r * qscale_r; - mul_vec *= quant_m_r; - block_r = mul_vec >> 4; - mul = __msa_pckev_h((v8i16) block_l, (v8i16) block_r); - block_neg = - mul; - sum = (v8i16) __msa_bmnz_v((v16u8) mul, (v16u8) block_neg, - (v16u8) mask); - sum = (v8i16) __msa_bmnz_v((v16u8) sum, (v16u8) block_org2, - (v16u8) zero_mask); - ST_SH(sum, block); - - block += 8; - quant_matrix += 8; - sad = __msa_hadd_s_w(sum, sum); - sum_res += HADD_SW_S32(sad); - mask = __msa_clti_s_h(block_org3, 0); - zero_mask = __msa_ceqi_h(block_org3, 0); - block_neg = - block_org3; - block_vec = (v8i16) __msa_bmnz_v((v16u8) block_org3, (v16u8) block_neg, - (v16u8) mask); - block_vec <<= 1; - block_vec += 1; - UNPCK_SH_SW(block_vec, block_r, block_l); - UNPCK_SH_SW(qscale_vec, qscale_r, qscale_l); - UNPCK_SH_SW(quant_m3, quant_m_r, quant_m_l); - mul_vec = block_l * qscale_l; - mul_vec *= quant_m_l; - block_l = mul_vec >> 4; - mul_vec = block_r * qscale_r; - mul_vec *= quant_m_r; - block_r = mul_vec >> 4; - mul = __msa_pckev_h((v8i16) block_l, (v8i16) block_r); - block_neg = - mul; - sum = (v8i16) __msa_bmnz_v((v16u8) mul, (v16u8) block_neg, - (v16u8) mask); - sum = (v8i16) __msa_bmnz_v((v16u8) sum, (v16u8) block_org3, - (v16u8) zero_mask); - ST_SH(sum, block); - - block += 8; - quant_matrix += 8; - sad = __msa_hadd_s_w(sum, sum); - sum_res += HADD_SW_S32(sad); - } - - return sum_res; -} - -void ff_dct_unquantize_h263_intra_msa(MpegEncContext *s, - int16_t *block, int32_t index, - int32_t qscale) -{ - int32_t qmul, qadd; - int32_t nCoeffs; - - av_assert2(s->block_last_index[index] >= 0 || s->h263_aic); - - qmul = qscale << 1; - - if (!s->h263_aic) { - block[0] *= index < 4 ? s->y_dc_scale : s->c_dc_scale; - qadd = (qscale - 1) | 1; - } else { - qadd = 0; - } - if (s->ac_pred) - nCoeffs = 63; - else - nCoeffs = s->inter_scantable.raster_end[s->block_last_index[index]]; - - h263_dct_unquantize_msa(block, qmul, qadd, nCoeffs, 1); -} - -void ff_dct_unquantize_h263_inter_msa(MpegEncContext *s, - int16_t *block, int32_t index, - int32_t qscale) -{ - int32_t qmul, qadd; - int32_t nCoeffs; - - av_assert2(s->block_last_index[index] >= 0); - - qadd = (qscale - 1) | 1; - qmul = qscale << 1; - - nCoeffs = s->inter_scantable.raster_end[s->block_last_index[index]]; - - h263_dct_unquantize_msa(block, qmul, qadd, nCoeffs, 0); -} - -void ff_dct_unquantize_mpeg2_inter_msa(MpegEncContext *s, - int16_t *block, int32_t index, - int32_t qscale) -{ - const uint16_t *quant_matrix; - int32_t sum = -1; - - quant_matrix = s->inter_matrix; - - sum = mpeg2_dct_unquantize_inter_msa(block, qscale, quant_matrix); - - block[63] ^= sum & 1; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Frag Pro Shooter 3.5.1 Mod Apk with Unlimited Money and Gems.md b/spaces/congsaPfin/Manga-OCR/logs/Download Frag Pro Shooter 3.5.1 Mod Apk with Unlimited Money and Gems.md deleted file mode 100644 index ed63659b66520118ba8008438a8d00f8de4af7de..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Frag Pro Shooter 3.5.1 Mod Apk with Unlimited Money and Gems.md +++ /dev/null @@ -1,83 +0,0 @@ -
            -

            FRAG Pro Shooter 3.5.1 Mod Apk: A Free PvP Hero Game with Unlimited Features

            -

            If you are looking for a free PvP hero game that is fast, fun, and competitive, then you should try FRAG Pro Shooter. This game is developed by Oh BiBi, a French studio that specializes in FPS and TPS games. In this game, you can choose from over 80 heroes, each with their own unique skills and abilities, and form a squad of five to enter the arena and battle against other players from around the world. You can also customize your heroes with skins, weapons, and accessories, and upgrade them to make them more powerful. In this article, we will tell you everything you need to know about FRAG Pro Shooter, and how you can download and install the latest mod apk version that gives you unlimited money, diamonds, and god mode.

            -

            What is FRAG Pro Shooter?

            -

            FRAG Pro Shooter is a game that combines elements of FPS and TPS genres, as well as MOBA and card games. It is designed to be easy to play, but hard to master, as you have to switch between your heroes strategically and use their skills wisely. Here are some of the features that make FRAG Pro Shooter a great game:

            -

            frag pro shooter 3.5.1 mod apk


            Download Zip 🗸 https://urlca.com/2uO685



            -

            A fast-paced and fun shooter game

            -

            In FRAG Pro Shooter, you can experience the thrill of shooting and dodging bullets in real-time, as you control your hero with simple and intuitive controls. You can also switch between your heroes at any time, depending on the situation and your strategy. The game has stunning graphics and animations, as well as smooth gameplay and responsive controls.

            -

            A diverse and customizable hero roster

            -

            One of the best things about FRAG Pro Shooter is the variety of heroes you can choose from. There are over 80 heroes in the game, each with their own personality, backstory, role, and skill set. You can find heroes that suit your playstyle, whether you prefer sniping, melee, stealth, support, or anything in between. You can also customize your heroes with different skins, weapons, and accessories, as well as upgrade them with cards to improve their stats and abilities.

            -

            A competitive and social game mode

            -

            FRAG Pro Shooter is not only a solo game, but also a team game. You can join or create a club with other players, chat with them, share tips and strategies, and challenge them to friendly matches. You can also compete in ranked matches and tournaments to climb the leaderboard and earn rewards. The game also has a spectator mode, where you can watch other players' matches live or replay them later.

            -

            What is FRAG Pro Shooter 3.5.1 Mod Apk?

            -

            FRAG Pro Shooter 3.5.1 Mod Apk is a modified version of the original game that gives you some extra features that are not available in the official version. These features include:

            -

            A way to unlock unlimited money, diamonds, and god mode

            -

            Money and diamonds are the main currencies in FRAG Pro Shooter, which you can use to buy new heroes, skins, weapons, accessories, cards, chests, and more. However, earning them can be slow and tedious, especially if you want to unlock everything in the game. With FRAG Pro Shooter 3.5.1 Mod Apk, you can get unlimited money and diamonds by using the mod apk. This way, you can buy and upgrade anything you want without worrying about the cost. You can also enjoy the god mode feature, which makes you invincible and immune to any damage. This can help you win every match easily and have more fun.

            -

            A safe and easy to install apk file

            -

            Some people may be worried about the safety and compatibility of the mod apk file, but there is no need to be. The FRAG Pro Shooter 3.5.1 Mod Apk file is tested and verified by many users, and it is free from any viruses or malware. It is also compatible with most Android devices, as long as they have Android 4.3 or higher. The installation process is also very simple and straightforward, as we will explain in the next section.

            -

            How to download and install FRAG Pro Shooter 3.5.1 Mod Apk?

            -

            If you want to download and install FRAG Pro Shooter 3.5.1 Mod Apk on your device, you just need to follow these steps:

            -

            frag pro shooter mod apk unlimited money and gems 3.5.1
            -frag pro shooter hack mod apk download 3.5.1
            -frag pro shooter mod menu apk 3.5.1 latest version
            -frag pro shooter 3.5.1 mod apk android 1
            -frag pro shooter mod apk 3.5.1 rexdl
            -frag pro shooter mod apk 3.5.1 revdl
            -frag pro shooter mod apk 3.5.1 unlimited diamonds
            -frag pro shooter mod apk 3.5.1 god mode
            -frag pro shooter mod apk 3.5.1 no root
            -frag pro shooter mod apk 3.5.1 offline
            -frag pro shooter mod apk 3.5.1 free shopping
            -frag pro shooter mod apk 3.5.1 all characters unlocked
            -frag pro shooter mod apk 3.5.1 anti ban
            -frag pro shooter mod apk 3.5.1 mega mod
            -frag pro shooter mod apk 3.5.1 vip unlocked
            -frag pro shooter mod apk 3.5.1 premium unlocked
            -frag pro shooter mod apk 3.5.1 unlimited ammo
            -frag pro shooter mod apk 3.5.1 one hit kill
            -frag pro shooter mod apk 3.5.1 high damage
            -frag pro shooter mod apk 3.5.1 auto aim
            -frag pro shooter mod apk 3.5.1 wall hack
            -frag pro shooter mod apk 3.5.1 radar hack
            -frag pro shooter mod apk 3.5.1 speed hack
            -frag pro shooter mod apk 3.5.1 no ads
            -frag pro shooter mod apk 3.5.1 no cooldown
            -frag pro shooter mod apk 3.5.1 unlimited energy
            -frag pro shooter mod apk 3.5.1 unlimited coins
            -frag pro shooter mod apk 3.5.1 unlimited gold
            -frag pro shooter mod apk 3.5.1 unlimited cards
            -frag pro shooter mod apk 3.5.1 unlimited chests
            -frag pro shooter mod apk 3.5.1 unlimited tickets
            -frag pro shooter mod apk 3.5.1 unlimited fragments
            -frag pro shooter mod apk 3.5.1 unlimited stars
            -frag pro shooter mod apk 3.5.1 unlimited trophies
            -frag pro shooter mod apk 3.5.1 unlimited season pass
            -frag pro shooter mod apk 3.5.1 unlocked everything
            -frag pro shooter mod apk 3.5.1 latest update download
            -download frag pro shooter mod apk version 3..51 for android
            -how to install frag pro shooter mod apk 3..51 on android device
            -how to play frag pro shooter with friends using mod apk version 3..51

            -

            Step 1: Download the apk file from a trusted source

            -

            The first thing you need to do is to download the apk file from a reliable and secure source. You can use the link below to download the FRAG Pro Shooter 3.5.1 Mod Apk file directly to your device.

            -

            FRAG Pro Shooter 3.5.1 Mod Apk Download

            -

            Step 2: Enable unknown sources on your device

            -

            The next thing you need to do is to enable unknown sources on your device, which will allow you to install apps from sources other than the Google Play Store. To do this, you need to go to your device settings, then security, then unknown sources, and toggle it on.

            -

            Step 3: Install the apk file and launch the game

            -

            The final thing you need to do is to install the apk file and launch the game. To do this, you need to locate the downloaded apk file on your device, tap on it, and follow the instructions on the screen. Once the installation is complete, you can open the game and enjoy the mod features.

            -

            Conclusion

            -

            FRAG Pro Shooter is a free PvP hero game that is fast, fun, and competitive. You can choose from over 80 heroes, customize them, and form a squad of five to battle against other players online. You can also join or create a club, chat with other players, and participate in ranked matches and tournaments. If you want to have more fun and advantages in the game, you can download and install FRAG Pro Shooter 3.5.1 Mod Apk, which gives you unlimited money, diamonds, and god mode. This mod apk is safe and easy to install, as long as you follow the steps we provided in this article.

            -

            FAQs

            -

            Here are some of the frequently asked questions about FRAG Pro Shooter 3.5.1 Mod Apk:

            -

            Q: Is FRAG Pro Shooter 3.5.1 Mod Apk free?

            -

            A: Yes, FRAG Pro Shooter 3.5.1 Mod Apk is free to download and use.

            -

            Q: Is FRAG Pro Shooter 3.5.1 Mod Apk safe?

            -

            A: Yes, FRAG Pro Shooter 3.5.1 Mod Apk is safe and virus-free.

            -

            Q: Do I need to root my device to use FRAG Pro Shooter 3.5.1 Mod Apk?

            -

            A: No, you do not need to root your device to use FRAG Pro Shooter 3.5.1 Mod Apk.

            -

            Q: Can I play FRAG Pro Shooter online with other players using FRAG Pro Shooter 3.5.1 Mod Apk?

            -

            A: Yes, you can play FRAG Pro Shooter online with other players using FRAG Pro Shooter 3.5.1 Mod Apk.

            -

            Q: Will I get banned for using FRAG Pro Shooter 3.5.1 Mod Apk?

            -

            A: No, you will not get banned for using FRAG Pro Shooter 3.5.1 Mod Apk.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download PUBG MOBILE Arcane Mod Apk with Unlimited Money and Diamonds.md b/spaces/congsaPfin/Manga-OCR/logs/Download PUBG MOBILE Arcane Mod Apk with Unlimited Money and Diamonds.md deleted file mode 100644 index 796cd6f49c21295fb9151ed51c5e10e93043209c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download PUBG MOBILE Arcane Mod Apk with Unlimited Money and Diamonds.md +++ /dev/null @@ -1,133 +0,0 @@ - -

            PUBG Mobile Mod APK Unlimited Money: How to Download and Play

            -

            PUBG Mobile is one of the most popular and addictive games in the world. It is a battle royale game where you have to survive against 99 other players on an island. You can choose from various modes, maps, weapons, vehicles, and skins to customize your gameplay. You can also team up with your friends and communicate with them using voice chat. PUBG Mobile is a free-to-play game, but it also has in-game purchases that require real money. If you want to get unlimited money and other benefits in PUBG Mobile, you might be interested in PUBG Mobile Mod APK. In this article, we will tell you what PUBG Mobile Mod APK is, how to download and install it, and how to play it.

            -

            pubg mobile mod apk unlimited money


            DOWNLOADhttps://urlca.com/2uObFA



            -

            What is PUBG Mobile?

            -

            A brief introduction to the game and its features

            -

            PUBG Mobile is a mobile version of PlayerUnknown's Battlegrounds, a multiplayer online battle royale game developed by PUBG Corporation. The game was released in 2018 and has since become one of the most downloaded and played games on mobile devices. The game has over 100 million active users and has won several awards, including the Google Play Best Game of 2018.

            -

            The game is based on the concept of "last man standing". You have to parachute onto an island with 99 other players and scavenge for weapons, armor, ammo, and other items. You have to fight your way through the enemies and avoid the shrinking safe zone. The last player or team alive wins the match.

            -

            The game offers various modes, such as classic, arcade, arena, payload, infection, etc. You can also choose from different maps, such as Erangel, Miramar, Sanhok, Vikendi, Livik, etc. You can also customize your character's appearance, outfits, weapons, vehicles, etc. with skins and items that you can buy with in-game currency or real money.

            -

            pubg mobile arcane mod apk 2.2.3 unlimited money[^1^]
            -pubg mobile hack mod apk download unlimited uc
            -pubg mobile lite mod apk latest version unlimited health
            -pubg mobile kr mod apk free download unlimited everything
            -pubg mobile global mod apk 1.6.0 unlimited ammo
            -pubg mobile mod menu apk no root unlimited bp
            -pubg mobile esp mod apk anti ban unlimited coins
            -pubg mobile vn mod apk 2021 unlimited diamonds
            -pubg mobile india mod apk obb unlimited skins
            -pubg mobile tw mod apk revdl unlimited crates
            -pubg mobile zombie mode mod apk rexdl unlimited cash
            -pubg mobile chinese version mod apk android 1 unlimited weapons
            -pubg mobile korean version mod apk ios unlimited rp
            -pubg mobile beta version mod apk happymod unlimited cars
            -pubg mobile new era mod apk pure unlimited outfits
            -pubg mobile metro royale mod apk platinmods unlimited grenades
            -pubg mobile 2nd anniversary mod apk andropalace unlimited keys
            -pubg mobile new state mod apk apkpure unlimited gold
            -pubg mobile 90 fps mod apk apkmody unlimited silver
            -pubg mobile 120 fps mod apk apkmirror unlimited gems
            -pubg mobile 1.5 update mod apk an1 unlimited energy
            -pubg mobile mission ignition mod apk aptoide unlimited fuel
            -pubg mobile project t mod apk blackmod unlimited food
            -pubg mobile tesla factory mod apk bluestacks unlimited materials
            -pubg mobile erangel 2.0 mod apk by kmods unlimited resources
            -pubg mobile livik map mod apk clubapk unlimited points
            -pubg mobile miramar map mod apk dlandroid unlimited tickets
            -pubg mobile sanhok map mod apk ihackedit unlimited tokens
            -pubg mobile vikendi map mod apk lenov.ru unlimited coupons
            -pubg mobile karakin map mod apk mob.org unlimited vouchers
            -pubg mobile payload mode 2.0 mod apk onhax unlimited lives
            -pubg mobile power armor mode mod apk panda helper unlimited shields
            -pubg mobile arctic mode mod apk techylist unlimited drones
            -pubg mobile ragegear mode mod apk uptodown unlimited nitro
            -pubg mobile infection mode mod apk xda developers unlimited time
            -pubg mobile tdm mode mod apk ytricks.net unlimited kills
            -how to install pubg mobile mod apk with obb file for android device with unlimited money hack
            -how to download and play pubg mobile lite india version with mega mod menu and free uc generator online no verification required for ios device with jailbreak and cydia impactor tool installed on pc or mac computer with windows or linux operating system and internet connection available for free without any survey or human verification required for iphone ipad ipod touch users with apple id and password ready to use for gaming purposes only not for commercial use or illegal activities disclaimer use at your own risk we are not responsible for any damage or loss caused by using this app or website or service or product or software or program or tool or method or technique or trick or tip or guide or tutorial or instruction or information or advice or suggestion or recommendation or solution or answer or explanation or example or demonstration or illustration or clarification or confirmation or verification or validation or justification or rationalization or reason or logic or argument or evidence or proof or fact or data or statistic or number or figure or symbol or sign or mark or indicator or pointer or marker or signal or cue or clue or hint or tip-off or lead-in

            -

            What is PUBG Mobile Mod APK?

            -

            A modified version of the game that gives unlimited money and other benefits

            -

            PUBG Mobile Mod APK is a modified version of the original PUBG Mobile game that gives you unlimited money and other benefits. With this mod apk file, you can get unlimited UC (Unknown Cash), BP (Battle Points), diamonds, coins, etc. You can also unlock all the skins, items, weapons, vehicles, etc. that are otherwise paid or hard to get. You can also get features like aimbot, wallhack, speed hack, no recoil, etc. that give you an edge over your enemies.

            -

            Benefits of PUBG Mobile Mod APK

            -

            Some of the benefits of using PUBG Mobile Mod APK are:

            -
              -
            • You can get unlimited money and resources that you can use to buy anything you want in the game.
            • -
            • You can unlock all the skins, items, weapons, vehicles, etc. that are otherwise paid or hard to get.
            • -
            • You can get features like aimbot, wallhack, speed hack, no recoil, etc. that give you an edge over your enemies.
            • -
            • You can enjoy the game with more fun and excitement.
            • -
            • You can save your time and effort by not having to grind for resources or complete missions.
            • -
            -

            Risks of PUBG Mobile Mod APK

            -

            Some of the risks of using PUBG Mobile Mod APK are:

            -
              -
            • You might get banned from the game if the developers detect that you are using a mod apk file.
            • -
            • You might get malware or viruses on your device if you download the mod apk file from an untrusted source.
            • -
            • You might lose your progress and data if the mod apk file is not compatible with the latest version of the game.
            • -
            • You might face legal issues if you violate the terms and conditions of the game by using a mod apk file.
            • -
            -

            How to Download and Install PUBG Mobile Mod APK?

            -

            A step-by-step guide to download and install the mod apk file on your device

            -

            If you want to download and install PUBG Mobile Mod APK on your device, you need to follow these steps:

            -

            Requirements for PUBG Mobile Mod APK

            -

            Before you download and install PUBG Mobile Mod APK, you need to make sure that your device meets these requirements:

            -
              -
            • Your device should have Android 4.3 or higher version.
            • -
            • Your device should have at least 2 GB of RAM and 3 GB of free storage space.
            • -
            • Your device should have a stable internet connection.
            • -
            • Your device should allow installation from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources.
            • -
            -

            Sources for PUBG Mobile Mod APK

            -

            There are many websites that claim to provide PUBG Mobile Mod APK files, but not all of them are safe and reliable. You need to be careful and choose a trusted source that has positive reviews and ratings from other users. Some of the sources that we recommend are:

            -
              -
            • [APKPure]: This is a popular website that offers various mod apk files for different games and apps. You can download PUBG Mobile Mod APK from this website by clicking on the download button and following the instructions.
            • -
            • [APKDone]: This is another website that provides mod apk files for various games and apps. You can download PUBG Mobile Mod APK from this website by clicking on the download button and following the instructions.
            • -
            • [ModDroid]: This is a website that specializes in mod apk files for Android games. You can download PUBG Mobile Mod APK from this website by clicking on the download button and following the instructions.
            • -
            -

            Installation process for PUBG Mobile Mod APK

            -

            After you have downloaded the PUBG Mobile Mod APK file from a trusted source, you need to install it on your device. You can follow these steps to install PUBG Mobile Mod APK:

            -
              -
            1. Locate the downloaded PUBG Mobile Mod APK file on your device's file manager or downloads folder.
            2. -
            3. Tap on the file and select Install. You might see a warning message that says "This type of file can harm your device". Ignore it and tap on OK.
            4. -
            5. Wait for the installation process to complete. It might take a few minutes depending on your device's performance.
            6. -
            7. Once the installation is done, you will see a message that says "App installed". Tap on Open to launch the game.
            8. -
            -

            How to Play PUBG Mobile Mod APK?

            -

            A brief overview of how to start and enjoy the game with unlimited money and other features

            -

            Now that you have installed PUBG Mobile Mod APK on your device, you are ready to play the game with unlimited money and other features. Here is how you can start and enjoy the game:

            -

            Tips and tricks for PUBG Mobile Mod APK

            -

            Some of the tips and tricks that you can use to play PUBG Mobile Mod APK are:

            -
              -
            • You can use the unlimited money to buy any skin, item, weapon, vehicle, etc. that you want in the game. You can also upgrade your weapons and vehicles with ease.
            • -
            • You can use the aimbot feature to automatically aim at your enemies and shoot them with accuracy. You can also use the wallhack feature to see through walls and spot your enemies easily.
            • -
            • You can use the speed hack feature to move faster than your enemies and escape from danger. You can also use the no recoil feature to reduce the recoil of your weapons and shoot more smoothly.
            • -
            • You can use the voice chat feature to communicate with your teammates and coordinate your strategies. You can also use the voice changer feature to disguise your voice and prank your enemies.
            • -
            • You can use the anti-ban feature to avoid getting banned from the game. You can also use the VPN feature to change your location and access different servers.
            • -
            -

            Precautions and warnings for PUBG Mobile Mod APK

            -

            Some of the precautions and warnings that you should follow when playing PUBG Mobile Mod APK are:

            -
              -
            • You should not use the mod apk file on your main account or device. You should create a new account and use a spare device to play the game.
            • -
            • You should not abuse the mod apk features or use them excessively. You should play the game normally and use the features only when necessary.
            • -
            • You should not brag or boast about using the mod apk file or show off your unlimited money and other benefits. You should keep a low profile and avoid attracting attention from other players or the developers.
            • -
            • You should not download or install the mod apk file from an untrusted source or without scanning it for malware or viruses. You should also update the mod apk file regularly to ensure its compatibility and safety.
            • -
            -

            Conclusion

            -

            A summary of the main points and a call to action

            -

            PUBG Mobile Mod APK is a modified version of the original PUBG Mobile game that gives you unlimited money and other benefits. You can download and install it on your device by following the steps mentioned above. You can also play it with ease by using the tips and tricks mentioned above. However, you should also be aware of the risks and precautions involved in using PUBG Mobile Mod APK. You should use it at your own risk and responsibility.

            -

            If you are looking for a way to enjoy PUBG Mobile with more fun and excitement, you can try PUBG Mobile Mod APK. However, if you want to play the game fairly and legitimately, you should stick to the original PUBG Mobile game. The choice is yours.

            -

            FAQs

            -

            Some of the frequently asked questions about PUBG Mobile Mod APK are:

            -
              -
            • Q: Is PUBG Mobile Mod APK legal?
            • -
            • A: No, PUBG Mobile Mod APK is not legal. It is a violation of the terms and conditions of the game and can result in legal action from the developers.
            • -
            • Q: Is PUBG Mobile Mod APK safe?
            • -
            • A: Not necessarily. PUBG Mobile Mod APK can be unsafe if you download it from an untrusted source or without scanning it for malware or viruses. It can also be unsafe if you use it on your main account or device.
            • -
            • Q: Is PUBG Mobile Mod APK free?
            • -
            • A: Yes, PUBG Mobile Mod APK is free. You do not have to pay any money to download or install it on your device.
            • -
            • Q: How can I update PUBG Mobile Mod APK?
            • -
            • A: You can update PUBG Mobile Mod APK by downloading and installing the latest version of the mod apk file from a trusted source. You should also check for updates regularly to ensure its compatibility and safety.
            • -
            • Q: How can I uninstall PUBG Mobile Mod APK?
            • -
            • A: You can uninstall PUBG Mobile Mod APK by going to Settings > Apps > PUBG Mobile > Uninstall. You can also delete the mod apk file from your device's file manager or downloads folder.
            • -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Score Hero Mod Apk Apptoko and Experience the Unique Gameplay of Score Hero.md b/spaces/congsaPfin/Manga-OCR/logs/Download Score Hero Mod Apk Apptoko and Experience the Unique Gameplay of Score Hero.md deleted file mode 100644 index 3661340f57771ee2e46ad51f4b2f3befd0235890..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Score Hero Mod Apk Apptoko and Experience the Unique Gameplay of Score Hero.md +++ /dev/null @@ -1,44 +0,0 @@ -
            -

            Download Score Hero Mod Apk Apptoko: A Guide for Football Fans

            -

            If you are a football fan who loves to play soccer games on your Android device, you might have heard of Score Hero. It is a popular game that lets you create your own soccer legend and experience a dramatic career with over 800 levels. But did you know that you can download a modded version of Score Hero from Apptoko, an alternative app store for Indonesian mobile users? In this article, we will tell you what Score Hero and Apptoko are, how to download Score Hero mod apk apptoko, and what are the advantages of doing so. Read on to find out more!

            -

            download score hero mod apk apptoko


            Download https://urlca.com/2uOg9i



            -

            What is Score Hero?

            -

            Score Hero is a soccer game developed by First Touch Games Ltd. It is different from other soccer games because it focuses on the story and the choices of your player, rather than the whole team. You can customize your hero's appearance, name, nationality, and club, and then guide him through various scenarios and challenges in each level. You can also win awards, trophies, medals, and glory by scoring goals, changing clubs, representing your country, and going for glory. The game features realistic 3D graphics, simple gameplay, and an immersive sound effects.

            -

            What is Apptoko?

            -

            Apptoko is an app store that offers a huge catalog of apps, games, ringtones, wallpapers, and even comics. It is designed mainly for Indonesian users, but it also supports other languages. Apptoko does not require you to create an account or root your device to download apps. It also has a fast downloading speed and can download multiple apps at the same time. Apptoko specializes in games, especially modified games that are not available on mainstream app stores. You can find games like Score Hero mod apk apptoko on this app store.

            -

            How to Download Score Hero Mod Apk Apptoko

            -

            To download Score Hero mod apk apptoko, you need to follow these steps:

            -
              -
            1. Go to Apptoko's website or download Apptoko's APK file from a trusted source.
            2. -
            3. Install Apptoko on your device by allowing unknown sources in your settings.
            4. -
            5. Open Apptoko and search for Score Hero mod apk in the search bar.
            6. -
            7. Select the game from the results and tap on the download button.
            8. -
            9. Wait for the download to finish and then install the game on your device.
            10. -
            11. Enjoy playing Score Hero mod apk apptoko!
            12. -
            -

            What are the Advantages of Downloading Score Hero Mod Apk Apptoko?

            -

            By downloading Score Hero mod apk apptoko, you can enjoy these features and benefits:

            -
              -
            • You can get unlimited money to buy items and upgrade your hero.
            • -
            • You can get unlimited energy to play as many levels as you want.
            • -
            • You can unlock all levels and modes without any restrictions.
            • -
            • You can customize your hero with more options and styles.
            • -
            • You can play offline without any internet connection.
            • -
            -

            Conclusion

            -

            Score

            Score Hero mod apk apptoko is a great way to enjoy the game with more freedom and fun. You can create your own soccer legend and experience a thrilling career with unlimited resources and options. You can also download the game from a reliable and fast app store that offers many other games and apps for free. If you are a football fan who loves to play soccer games on your Android device, you should definitely try Score Hero mod apk apptoko.

            -

            FAQs

            -

            Here are some frequently asked questions and answers about Score Hero and Apptoko:

            -

            Q: Is Score Hero mod apk apptoko safe to download and install?

            -

            A: Yes, Score Hero mod apk apptoko is safe to download and install, as long as you get it from Apptoko's official website or a trusted source. Apptoko is a reputable app store that scans all the apps and games for viruses and malware before uploading them. However, you should always be careful when downloading apps from unknown sources and check the permissions and reviews before installing them.

            -

            Q: Is Score Hero mod apk apptoko compatible with my device?

            -

            A: Score Hero mod apk apptoko is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support the game or the modded features due to different specifications and settings. You can check the compatibility of your device by reading the game description and requirements on Apptoko's website or app.

            -

            -

            Q: How can I update Score Hero mod apk apptoko?

            -

            A: You can update Score Hero mod apk apptoko by following the same steps as downloading it. You can check for updates on Apptoko's website or app, or enable the automatic update option in your settings. However, you should be aware that updating the game may cause some issues or errors with the modded features, so you should backup your data before updating.

            -

            Q: How can I contact Apptoko's customer service?

            -

            A: You can contact Apptoko's customer service by visiting their website and filling out the contact form, or by sending them an email at support@apptoko.com. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, and YouTube, to get the latest news and updates about their app store.

            -

            Q: What are some other games that I can download from Apptoko?

            -

            A: Apptoko offers a huge catalog of games, especially modified games that are not available on mainstream app stores. You can find games from various genres, such as action, adventure, arcade, puzzle, racing, simulation, sports, strategy, and more. Some of the popular games that you can download from Apptoko are PUBG Mobile Lite Mod Apk, Subway Surfers Mod Apk, Clash of Clans Mod Apk, Minecraft Mod Apk, Dream League Soccer Mod Apk, and more.

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Euchre online for free with stunning graphics and themes.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Euchre online for free with stunning graphics and themes.md deleted file mode 100644 index 53646820ad0990d848878ee7f7d83b09ab01bf45..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Euchre online for free with stunning graphics and themes.md +++ /dev/null @@ -1,107 +0,0 @@ - -

            Free Euchre Download for Windows 10: How to Play and Win this Classic Card Game

            -

            If you are looking for a fun and challenging card game to play with your friends or family, you should try euchre. Euchre is a classic trick-taking card game that is easy to learn but hard to master. You can play euchre online or offline on your Windows 10 device, and enjoy hours of entertainment and excitement. In this article, we will show you how to download and install euchre on your Windows 10 device, how to play and win euchre online or offline, and answer some frequently asked questions about euchre.

            -

            free euchre download for windows 10


            Download Zip ⇒⇒⇒ https://urlca.com/2uOcWy



            -

            What is Euchre and Why You Should Play It

            -

            Euchre is a card game that originated in Europe in the 18th century, and became popular in the United States in the 19th century. It is played by four players in two teams of two, using a modified deck of 24 cards (the aces, kings, queens, jacks, tens, and nines of each suit). The goal of the game is to win tricks by playing the highest card of the suit led or the highest card of the trump suit. The team that wins three or more tricks out of five in a round scores one or two points, depending on whether they made the trump suit or not. The first team to score 10 points wins the game.

            -

            The History and Popularity of Euchre

            -

            Euchre is believed to have evolved from a French game called écarté, which was played by Napoleon's army. It was brought to America by German immigrants, who called it juckerspiel. It became popular in the Midwest, especially in Michigan, Ohio, Indiana, Illinois, and Wisconsin. It is also played in Canada, Australia, New Zealand, and the United Kingdom. Euchre is considered one of the most social card games, as it encourages communication and cooperation between partners. It is also one of the most competitive card games, as it involves bluffing, strategy, and skill.

            -

            The Rules and Objectives of Euchre

            -

            The rules of euchre are relatively simple. Four players are divided into two teams of two, sitting across from each other. One player is designated as the dealer, who shuffles and deals five cards to each player clockwise. The remaining four cards are placed face down on the table, with the top card turned face up. This card determines the potential trump suit for the round. The player to the left of the dealer can either accept or reject the trump suit by saying \"I order it up\" or \"I pass\". If the player accepts the trump suit, their partner must discard one card from their hand and pick up the face-up card. If the player rejects the trump suit, the next player clockwise can either accept or reject it. This continues until either a player accepts the trump suit or all four players pass. If all four players pass, the face-up card is turned down and each player can choose a different suit as the trump suit or pass again. If all four players pass again, the cards are reshuffled and dealt by the next player clockwise.

            -

            Once the trump suit is determined, the player to the left of the dealer leads the first trick by playing any card from their hand. The other players must follow suit if they can, or play any other card if they cannot. The highest card of the suit led or the highest card of the trump suit wins the trick. The winner of the trick leads the next trick, and so on, until all five tricks are played. The team that wins three or more tricks scores one point if they made the trump suit, or two points if they did not. If the team that made the trump suit fails to win three tricks, they are euchred and the other team scores two points. The game continues until one team reaches 10 points.

            -

            The Benefits of Playing Euchre

            -

            Playing euchre is not only fun, but also good for your brain and your social skills. Euchre can help you improve your memory, concentration, logic, and problem-solving abilities, as you have to remember the cards played, keep track of the score, and plan your moves. Euchre can also help you develop your communication, teamwork, and sportsmanship skills, as you have to work with your partner, signal your intentions, and respect your opponents. Euchre can also help you relax, reduce stress, and make new friends, as you can play euchre online or offline with people from different backgrounds and interests.

            -

            free euchre online game for windows 10
            -free euchre card game app for windows 10
            -free euchre offline game download for windows 10
            -free euchre multiplayer game for windows 10
            -free euchre game with smart AI for windows 10
            -free euchre game with custom backgrounds for windows 10
            -free euchre game with special rings and rewards for windows 10
            -free euchre game with six themed tables for windows 10
            -free euchre game with amazing HD graphics for windows 10
            -free euchre game with single player and multiplayer modes for windows 10
            -free euchre game with facebook integration for windows 10
            -free euchre game with bonus gold every few hours for windows 10
            -free euchre game with classic trick-taking card play for windows 10
            -free euchre game with challenging AI opponents for windows 10
            -free euchre game with tournaments and leaderboard for windows 10
            -free euchre game with quests and items for windows 10
            -free euchre game with gifts and chat for windows 10
            -free euchre game with simulated gambling and digital purchases for windows 10
            -free euchre game with different levels of mastery for windows 10
            -free euchre game with knock euchre variant for windows 10
            -download hardwood euchre for windows 10 for free
            -download hardwood euchre card game for windows 10 for free
            -download hardwood euchre online game for windows 10 for free
            -download hardwood euchre offline game for windows 10 for free
            -download hardwood euchre multiplayer game for windows 10 for free
            -download hardwood euchre single player game for windows 10 for free
            -download hardwood euchre with customizable backgrounds, cards, avatars and tables for windows 10 for free
            -download hardwood euchre with paid downloadable content for windows 10 for free
            -download hardwood euchre with excellent graphics and sound effects for windows 10 for free
            -download hardwood euchre with smooth gameplay and animation for windows 10 for free
            -get euchre online from microsoft store for windows 10 for free
            -get euchre online card game from microsoft store for windows 10 for free
            -get euchre online multiplayer mode from microsoft store for windows 10 for free
            -get euchre online single player mode from microsoft store for windows 10 for free
            -get euchre online six themed tables from microsoft store for windows 10 for free
            -get euchre online special rings and rewards from microsoft store for windows 10 for free
            -get euchre online quests and items from microsoft store for windows 10 for free
            -get euchre online gifts and chat from microsoft store for windows 10 for free
            -get euchre online simulated gambling and digital purchases from microsoft store for windows 10 for free
            -get euchre online classic trick-taking card play from microsoft store for windows 10 for free.

            -

            How to Download and Install Euchre on Your Windows 10 Device

            -

            If you want to play euchre on your Windows 10 device, you have several options to choose from. You can download and install euchre games from the Microsoft Store, from third-party websites, or from online platforms. Here are some of the best euchre games for Windows 10 and how to download and install them.

            -

            The Best Euchre Games for Windows 10

            -

            There are many euchre games available for Windows 10, but some of them stand out for their quality, features, and reviews. Here are some of the best euchre games for Windows 10 that you can try:

            - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
            NameDescriptionRating
            Euchre Free: Classic Card GameA free euchre game that lets you play online or offline with customizable rules, difficulty levels, and graphics. You can also chat with other players and track your statistics.4.5/5 stars
            Euchre 3DA realistic euchre game that features 3D graphics, animations, sound effects, and voice overs. You can play online or offline with smart AI opponents or real players from around the world.4.4/5 stars
            Euchre OnlineA multiplayer euchre game that connects you with thousands of players online. You can join tournaments, leagues, clubs, and chat rooms, and earn coins and rewards.4.3/5 stars
            Euchre by NeuralPlayA challenging euchre game that uses advanced artificial intelligence to provide you with a realistic and fun experience. You can play offline or online with different modes, rules, and settings.4.2/5 stars
            Euchre GoldA premium euchre game that offers you a smooth and elegant gameplay. You can play offline or online with various options, themes, and backgrounds.4.1/5 stars
            -

            The Steps to Download and Install Euchre

            The steps to download and install euchre on your Windows 10 device depend on the source of the game. Here are the general steps for each source:

            - - Microsoft Store: You can download and install euchre games from the Microsoft Store by following these steps: - Open the Microsoft Store app on your Windows 10 device. - Search for the euchre game you want to download, such as Euchre Free: Classic Card Game, Euchre 3D, or Euchre Online. - Click on the game and then click on the Get or Install button. - Wait for the game to download and install on your device. - Launch the game and enjoy playing euchre. - Third-party websites: You can download and install euchre games from third-party websites by following these steps: - Go to the website of the euchre game you want to download, such as Euchre by NeuralPlay or Euchre Gold. - Click on the Download or Buy Now button and follow the instructions to complete the payment if required. - Save the game file on your device and run it to install the game. - Launch the game and enjoy playing euchre. - Online platforms: You can play euchre games online without downloading or installing anything by following these steps: - Go to the website of the online platform that offers euchre games, such as CardzMania, Trickster Cards, or Pogo. - Sign up for a free account or log in with your existing account if you have one. - Choose euchre from the list of games and join a table or create your own. - Play euchre online with other players or bots.

            The Features and Options of Euchre Games

            -

            Euchre games for Windows 10 offer you various features and options to enhance your gameplay and customize your preferences. Some of the common features and options are:

            -- Online or offline mode: You can choose to play euchre online with real players from around the world, or offline with smart AI opponents or local players on your device. - Customizable rules: You can adjust the rules of euchre to suit your style and level, such as changing the scoring system, the number of points to win, the trump suit selection, the stick-the-dealer option, and more. - Difficulty levels: You can select the difficulty level of your opponents, from easy to hard, depending on how challenging you want the game to be. - Graphics and sound: You can change the graphics and sound settings of the game, such as choosing different card backs, table backgrounds, themes, animations, sound effects, and voice overs. - Statistics and chat: You can view your statistics and achievements, such as your win rate, your rank, your coins, and your rewards. You can also chat with other players and make new friends. - Tournaments and leagues: You can join tournaments and leagues to compete with other players and win prizes and trophies. You can also create your own tournaments and leagues and invite your friends to join.

            How to Play and Win Euchre Online or Offline

            -

            Playing euchre online or offline is a great way to have fun and challenge yourself. However, if you want to win more often and impress your opponents, you need to learn some strategies and tips for euchre. Here are some of the basic and advanced techniques and tricks for euchre, as well as some common mistakes and pitfalls to avoid.

            -

            The Basic Strategies and Tips for Euchre

            -

            Some of the basic strategies and tips for euchre are:

            -- Know your partner: You should communicate with your partner and try to understand their style, signals, and preferences. You should also support your partner and trust their decisions. - Know your cards: You should memorize the cards in the deck, especially the trump cards, the bowers, and the aces. You should also keep track of the cards that have been played and the cards that are left in the deck. - Know your position: You should consider your position at the table, whether you are the dealer, the maker, the leader, or the follower. Your position affects your choices and actions in the game. - Know when to order up: You should order up the trump suit if you have a strong hand, such as three or more trump cards, a bower, or an ace. You should also order up if you want to prevent your opponents from making the trump suit or if you want to go alone. - Know when to pass: You should pass the trump suit if you have a weak hand, such as no trump cards, no aces, or no face cards. You should also pass if you want to wait for a better suit or if you want to bluff your opponents. - Know when to go alone: You should go alone if you have a very strong hand, such as four or five trump cards, both bowers, or an ace. You should also go alone if you are confident that you can win all five tricks or if you need to catch up with your opponents.

            The Advanced Techniques and Tricks for Euchre

            -

            Some of the advanced techniques and tricks for euchre are:

            -- Counting cards: You should count the cards that have been played and the cards that are left in the deck. This will help you estimate the probabilities of winning or losing a trick, as well as plan your moves ahead. - Signaling cards: You should signal your partner with subtle clues about your hand, such as playing a high card to show strength, playing a low card to show weakness, or playing a suit to show preference. You should also watch out for your opponents' signals and try to decode them. - Bluffing cards: You should bluff your opponents with deceptive moves, such as playing a high card to show weakness, playing a low card to show strength, or playing a suit to show indifference. You should also watch out for your opponents' bluffs and try to expose them. - Stealing the deal: You should steal the deal if you have a chance to do so without being noticed. This will give you an advantage in choosing the trump suit or passing it to your partner. - Loner defense: You should defend against a loner by playing your best card on the first trick, hoping to capture it or force out a high card from the loner. You should also cooperate with your partner and try to block the loner's suit.

            The Common Mistakes and Pitfalls to Avoid in Euchre

            -

            Some of the common mistakes and pitfalls to avoid in euchre are:

            -- Underbidding: You should not underbid your hand by passing a good trump suit or not going alone when you have a chance. This will cost you points and opportunities. - Overbidding: You should not overbid your hand by ordering up a bad trump suit or going alone when you have no chance. This will expose you to being euchred and losing points. - Leading trump: You should not lead trump unless you have a very good reason, such as having a bower or an ace. Leading trump will usually help your opponents by giving them a chance to play their trump cards. - Following suit: You should not follow suit unless you have to or you want to. Following suit will usually waste your high cards or give up tricks. You should try to play off suit or play trump instead. - Playing out of turn: You should not play out of turn by mistake or on purpose. Playing out of turn will usually result in a penalty or a confusion. You should pay attention to the order of play and wait for your turn.

            Conclusion

            -

            Euchre is a classic card game that you can play online or offline on your Windows 10 device. It is a game of skill, strategy, and luck that can provide you with hours of fun and challenge. You can download and install euchre games from various sources, such as the Microsoft Store, third-party websites, or online platforms. You can also learn some strategies and tips for euchre, as well as avoid some common mistakes and pitfalls. Euchre is a game that can improve your brain and social skills, as well as make you happy and relaxed. If you are looking for a free euchre download for Windows 10, you should try one of the games we recommended in this article. You will not regret it!

            -

            FAQs

            -

            Here are some frequently asked questions about euchre and their answers:

            -- Q: How many players can play euchre? - A: Euchre is usually played by four players in two teams of two, but there are also variations for two, three, or six players. - Q: What is the difference between a bower and a jack? - A: A bower is a jack of the same color as the trump suit. The right bower is the jack of the trump suit, and the left bower is the jack of the other suit of the same color. For example, if spades are trump, the right bower is the jack of spades and the left bower is the jack of clubs. The bowers are the highest cards in the game. - Q: What is the difference between a loner and a solo? - A: A loner and a solo are both terms for when a player decides to play without their partner, hoping to win all five tricks by themselves. The difference is that a loner is when the player makes the trump suit, and a solo is when the player chooses a different suit as the trump suit after all four players pass on the first round. - Q: What is the difference between a euchre and a skunk? - A: A euchre is when the team that makes the trump suit fails to win three tricks, and the other team scores two points. A skunk is when one team wins by 10 or more points, and the other team scores nothing. - Q: What are some variations of euchre? - A: There are many variations of euchre that change some aspects of the game, such as the number of cards, the scoring system, the trump suit selection, or the special rules. Some examples of euchre variations are bid euchre, railroad euchre, stick-the-dealer euchre, joker euchre, and buck euchre.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Autodesk 3ds Max 8 Download Crack Everything You Need to Know About the Software and Its Features.md b/spaces/contluForse/HuggingGPT/assets/Autodesk 3ds Max 8 Download Crack Everything You Need to Know About the Software and Its Features.md deleted file mode 100644 index 8d283e3bdbeec14edbe36593705edac049a1aa39..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Autodesk 3ds Max 8 Download Crack Everything You Need to Know About the Software and Its Features.md +++ /dev/null @@ -1,6 +0,0 @@ -

            autodesk 3ds max 8 download crack


            Download File ✑ ✑ ✑ https://ssurll.com/2uzxWH



            -
            - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/dataset.py b/spaces/cooelf/Multimodal-CoT/timm/data/dataset.py deleted file mode 100644 index e719f3f6d7db178eb29fd902e85b64ac5ec09dd8..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/data/dataset.py +++ /dev/null @@ -1,146 +0,0 @@ -""" Quick n Simple Image Folder, Tarfile based DataSet - -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch.utils.data as data -import os -import torch -import logging - -from PIL import Image - -from .parsers import create_parser - -_logger = logging.getLogger(__name__) - - -_ERROR_RETRY = 50 - - -class ImageDataset(data.Dataset): - - def __init__( - self, - root, - parser=None, - class_map='', - load_bytes=False, - transform=None, - ): - if parser is None or isinstance(parser, str): - parser = create_parser(parser or '', root=root, class_map=class_map) - self.parser = parser - self.load_bytes = load_bytes - self.transform = transform - self._consecutive_errors = 0 - - def __getitem__(self, index): - img, target = self.parser[index] - try: - img = img.read() if self.load_bytes else Image.open(img).convert('RGB') - except Exception as e: - _logger.warning(f'Skipped sample (index {index}, file {self.parser.filename(index)}). {str(e)}') - self._consecutive_errors += 1 - if self._consecutive_errors < _ERROR_RETRY: - return self.__getitem__((index + 1) % len(self.parser)) - else: - raise e - self._consecutive_errors = 0 - if self.transform is not None: - img = self.transform(img) - if target is None: - target = torch.tensor(-1, dtype=torch.long) - return img, target - - def __len__(self): - return len(self.parser) - - def filename(self, index, basename=False, absolute=False): - return self.parser.filename(index, basename, absolute) - - def filenames(self, basename=False, absolute=False): - return self.parser.filenames(basename, absolute) - - -class IterableImageDataset(data.IterableDataset): - - def __init__( - self, - root, - parser=None, - split='train', - is_training=False, - batch_size=None, - class_map='', - load_bytes=False, - repeats=0, - transform=None, - ): - assert parser is not None - if isinstance(parser, str): - self.parser = create_parser( - parser, root=root, split=split, is_training=is_training, batch_size=batch_size, repeats=repeats) - else: - self.parser = parser - self.transform = transform - self._consecutive_errors = 0 - - def __iter__(self): - for img, target in self.parser: - if self.transform is not None: - img = self.transform(img) - if target is None: - target = torch.tensor(-1, dtype=torch.long) - yield img, target - - def __len__(self): - if hasattr(self.parser, '__len__'): - return len(self.parser) - else: - return 0 - - def filename(self, index, basename=False, absolute=False): - assert False, 'Filename lookup by index not supported, use filenames().' - - def filenames(self, basename=False, absolute=False): - return self.parser.filenames(basename, absolute) - - -class AugMixDataset(torch.utils.data.Dataset): - """Dataset wrapper to perform AugMix or other clean/augmentation mixes""" - - def __init__(self, dataset, num_splits=2): - self.augmentation = None - self.normalize = None - self.dataset = dataset - if self.dataset.transform is not None: - self._set_transforms(self.dataset.transform) - self.num_splits = num_splits - - def _set_transforms(self, x): - assert isinstance(x, (list, tuple)) and len(x) == 3, 'Expecting a tuple/list of 3 transforms' - self.dataset.transform = x[0] - self.augmentation = x[1] - self.normalize = x[2] - - @property - def transform(self): - return self.dataset.transform - - @transform.setter - def transform(self, x): - self._set_transforms(x) - - def _normalize(self, x): - return x if self.normalize is None else self.normalize(x) - - def __getitem__(self, i): - x, y = self.dataset[i] # all splits share the same dataset base transform - x_list = [self._normalize(x)] # first split only normalizes (this is the 'clean' split) - # run the full augmentation on the remaining splits - for _ in range(self.num_splits - 1): - x_list.append(self._normalize(self.augmentation(x))) - return tuple(x_list), y - - def __len__(self): - return len(self.dataset) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/sampler/base_pixel_sampler.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/sampler/base_pixel_sampler.py deleted file mode 100644 index b75b1566c9f18169cee51d4b55d75e0357b69c57..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/sampler/base_pixel_sampler.py +++ /dev/null @@ -1,12 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BasePixelSampler(metaclass=ABCMeta): - """Base class of pixel sampler.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def sample(self, seg_logit, seg_label): - """Placeholder for sample function.""" diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/hrf.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/hrf.py deleted file mode 100644 index 923203b51377f9344277fc561803d7a78bd2c684..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/hrf.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/registry.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/registry.py deleted file mode 100644 index 39eabc58db4b5954478a2ac1ab91cea5e45ab055..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from annotator.uniformer.mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/renderer/camera.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/renderer/camera.py deleted file mode 100644 index e5c330a17e0166970428911a8f1ba92bb89f5034..0000000000000000000000000000000000000000 --- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/renderer/camera.py +++ /dev/null @@ -1,207 +0,0 @@ -import cv2 -import numpy as np - -from .glm import ortho - - -class Camera: - def __init__(self, width=1600, height=1200): - # Focal Length - # equivalent 50mm - focal = np.sqrt(width * width + height * height) - self.focal_x = focal - self.focal_y = focal - # Principal Point Offset - self.principal_x = width / 2 - self.principal_y = height / 2 - # Axis Skew - self.skew = 0 - # Image Size - self.width = width - self.height = height - - self.near = 1 - self.far = 10 - - # Camera Center - self.center = np.array([0, 0, 1.6]) - self.direction = np.array([0, 0, -1]) - self.right = np.array([1, 0, 0]) - self.up = np.array([0, 1, 0]) - - self.ortho_ratio = None - - def sanity_check(self): - self.center = self.center.reshape([-1]) - self.direction = self.direction.reshape([-1]) - self.right = self.right.reshape([-1]) - self.up = self.up.reshape([-1]) - - assert len(self.center) == 3 - assert len(self.direction) == 3 - assert len(self.right) == 3 - assert len(self.up) == 3 - - @staticmethod - def normalize_vector(v): - v_norm = np.linalg.norm(v) - return v if v_norm == 0 else v / v_norm - - def get_real_z_value(self, z): - z_near = self.near - z_far = self.far - z_n = 2.0 * z - 1.0 - z_e = 2.0 * z_near * z_far / (z_far + z_near - z_n * (z_far - z_near)) - return z_e - - def get_rotation_matrix(self): - rot_mat = np.eye(3) - s = self.right - s = self.normalize_vector(s) - rot_mat[0, :] = s - u = self.up - u = self.normalize_vector(u) - rot_mat[1, :] = -u - rot_mat[2, :] = self.normalize_vector(self.direction) - - return rot_mat - - def get_translation_vector(self): - rot_mat = self.get_rotation_matrix() - trans = -np.dot(rot_mat, self.center) - return trans - - def get_intrinsic_matrix(self): - int_mat = np.eye(3) - - int_mat[0, 0] = self.focal_x - int_mat[1, 1] = self.focal_y - int_mat[0, 1] = self.skew - int_mat[0, 2] = self.principal_x - int_mat[1, 2] = self.principal_y - - return int_mat - - def get_projection_matrix(self): - ext_mat = self.get_extrinsic_matrix() - int_mat = self.get_intrinsic_matrix() - - return np.matmul(int_mat, ext_mat) - - def get_extrinsic_matrix(self): - rot_mat = self.get_rotation_matrix() - int_mat = self.get_intrinsic_matrix() - trans = self.get_translation_vector() - - extrinsic = np.eye(4) - extrinsic[:3, :3] = rot_mat - extrinsic[:3, 3] = trans - - return extrinsic[:3, :] - - def set_rotation_matrix(self, rot_mat): - self.direction = rot_mat[2, :] - self.up = -rot_mat[1, :] - self.right = rot_mat[0, :] - - def set_intrinsic_matrix(self, int_mat): - self.focal_x = int_mat[0, 0] - self.focal_y = int_mat[1, 1] - self.skew = int_mat[0, 1] - self.principal_x = int_mat[0, 2] - self.principal_y = int_mat[1, 2] - - def set_projection_matrix(self, proj_mat): - res = cv2.decomposeProjectionMatrix(proj_mat) - int_mat, rot_mat, camera_center_homo = res[0], res[1], res[2] - camera_center = camera_center_homo[0:3] / camera_center_homo[3] - camera_center = camera_center.reshape(-1) - int_mat = int_mat / int_mat[2][2] - - self.set_intrinsic_matrix(int_mat) - self.set_rotation_matrix(rot_mat) - self.center = camera_center - - self.sanity_check() - - def get_gl_matrix(self): - z_near = self.near - z_far = self.far - rot_mat = self.get_rotation_matrix() - int_mat = self.get_intrinsic_matrix() - trans = self.get_translation_vector() - - extrinsic = np.eye(4) - extrinsic[:3, :3] = rot_mat - extrinsic[:3, 3] = trans - axis_adj = np.eye(4) - axis_adj[2, 2] = -1 - axis_adj[1, 1] = -1 - model_view = np.matmul(axis_adj, extrinsic) - - projective = np.zeros([4, 4]) - projective[:2, :2] = int_mat[:2, :2] - projective[:2, 2:3] = -int_mat[:2, 2:3] - projective[3, 2] = -1 - projective[2, 2] = (z_near + z_far) - projective[2, 3] = (z_near * z_far) - - if self.ortho_ratio is None: - ndc = ortho(0, self.width, 0, self.height, z_near, z_far) - perspective = np.matmul(ndc, projective) - else: - perspective = ortho(-self.width * self.ortho_ratio / 2, self.width * self.ortho_ratio / 2, - -self.height * self.ortho_ratio / 2, self.height * self.ortho_ratio / 2, - z_near, z_far) - - return perspective, model_view - - -def KRT_from_P(proj_mat, normalize_K=True): - res = cv2.decomposeProjectionMatrix(proj_mat) - K, Rot, camera_center_homog = res[0], res[1], res[2] - camera_center = camera_center_homog[0:3] / camera_center_homog[3] - trans = -Rot.dot(camera_center) - if normalize_K: - K = K / K[2][2] - return K, Rot, trans - - -def MVP_from_P(proj_mat, width, height, near=0.1, far=10000): - ''' - Convert OpenCV camera calibration matrix to OpenGL projection and model view matrix - :param proj_mat: OpenCV camera projeciton matrix - :param width: Image width - :param height: Image height - :param near: Z near value - :param far: Z far value - :return: OpenGL projection matrix and model view matrix - ''' - res = cv2.decomposeProjectionMatrix(proj_mat) - K, Rot, camera_center_homog = res[0], res[1], res[2] - camera_center = camera_center_homog[0:3] / camera_center_homog[3] - trans = -Rot.dot(camera_center) - K = K / K[2][2] - - extrinsic = np.eye(4) - extrinsic[:3, :3] = Rot - extrinsic[:3, 3:4] = trans - axis_adj = np.eye(4) - axis_adj[2, 2] = -1 - axis_adj[1, 1] = -1 - model_view = np.matmul(axis_adj, extrinsic) - - zFar = far - zNear = near - projective = np.zeros([4, 4]) - projective[:2, :2] = K[:2, :2] - projective[:2, 2:3] = -K[:2, 2:3] - projective[3, 2] = -1 - projective[2, 2] = (zNear + zFar) - projective[2, 3] = (zNear * zFar) - - ndc = ortho(0, width, 0, height, zNear, zFar) - - perspective = np.matmul(ndc, projective) - - return perspective, model_view diff --git a/spaces/crylake/img2poem/query2labels/lib/utils/cutout.py b/spaces/crylake/img2poem/query2labels/lib/utils/cutout.py deleted file mode 100644 index 0dd0951fc50ccf2fbd0c1991af45ff64eaff8921..0000000000000000000000000000000000000000 --- a/spaces/crylake/img2poem/query2labels/lib/utils/cutout.py +++ /dev/null @@ -1,95 +0,0 @@ -import torch -import numpy as np -from PIL import ImageDraw -import random - -class SLCutoutPIL(object): - def __init__(self, n_holes, length, cut_fact=None): - self.n_holes = n_holes - self.length = length - self.cut_fact = cut_fact - if self.cut_fact is not None: - assert length < 0, "length must be set to -1 but {} if cut_fact is not None!".format(length) - - def __call__(self, x): - img_draw = ImageDraw.Draw(x) - h, w = x.size[0], x.size[1] # HWC - if self.cut_fact is not None: - h_cutout = int(self.cutout_factor * h) - w_cutout = int(self.cutout_factor * w) - else: - h_cutout = int(self.length) - w_cutout = int(self.length) - for i in range(self.n_holes): - y_c = np.random.randint(h) - x_c = np.random.randint(w) - - y1 = np.clip(y_c - h_cutout // 2, 0, h) - y2 = np.clip(y_c + h_cutout // 2, 0, h) - x1 = np.clip(x_c - w_cutout // 2, 0, w) - x2 = np.clip(x_c + w_cutout // 2, 0, w) - fill_color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)) - img_draw.rectangle([x1, y1, x2, y2], fill=fill_color) - - return x - -class CutoutPIL(object): - def __init__(self, cutout_factor=0.5): - self.cutout_factor = cutout_factor - - def __call__(self, x): - img_draw = ImageDraw.Draw(x) - h, w = x.size[0], x.size[1] # HWC - h_cutout = int(self.cutout_factor * h + 0.5) - w_cutout = int(self.cutout_factor * w + 0.5) - y_c = np.random.randint(h) - x_c = np.random.randint(w) - - y1 = np.clip(y_c - h_cutout // 2, 0, h) - y2 = np.clip(y_c + h_cutout // 2, 0, h) - x1 = np.clip(x_c - w_cutout // 2, 0, w) - x2 = np.clip(x_c + w_cutout // 2, 0, w) - fill_color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)) - img_draw.rectangle([x1, y1, x2, y2], fill=fill_color) - - return x - -class Cutout(object): - """Randomly mask out one or more patches from an image. - - Args: - n_holes (int): Number of patches to cut out of each image. - length (int): The length (in pixels) of each square patch. - """ - def __init__(self, n_holes, length): - self.n_holes = n_holes - self.length = length - - def __call__(self, img): - """ - Args: - img (Tensor): Tensor image of size (C, H, W). - Returns: - Tensor: Image with n_holes of dimension length x length cut out of it. - """ - h = img.size(1) - w = img.size(2) - - mask = np.ones((h, w), np.float32) - - for n in range(self.n_holes): - y = np.random.randint(h) - x = np.random.randint(w) - - y1 = np.clip(y - self.length // 2, 0, h) - y2 = np.clip(y + self.length // 2, 0, h) - x1 = np.clip(x - self.length // 2, 0, w) - x2 = np.clip(x + self.length // 2, 0, w) - - mask[y1: y2, x1: x2] = 0. - - mask = torch.from_numpy(mask) - mask = mask.expand_as(img) - img = img * mask - - return img diff --git a/spaces/csuhan/opendet2/opendet2/data/voc_coco.py b/spaces/csuhan/opendet2/opendet2/data/voc_coco.py deleted file mode 100644 index 4e5b39fc33afec0c49c32dfd3882b95fe9244bf4..0000000000000000000000000000000000000000 --- a/spaces/csuhan/opendet2/opendet2/data/voc_coco.py +++ /dev/null @@ -1,35 +0,0 @@ -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_voc_instances - -VOC_COCO_CATEGORIES = [ - # VOC - "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", - "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", - "pottedplant", "sheep", "sofa", "train", "tvmonitor", - # COCO-20-40 - "truck", "traffic light", "fire hydrant", "stop sign", "parking meter", - "bench", "elephant", "bear", "zebra", "giraffe", - "backpack", "umbrella", "handbag", "tie", "suitcase", - "microwave", "oven", "toaster", "sink", "refrigerator", - # COCO-40-60 - "frisbee", "skis", "snowboard", "sports ball", "kite", - "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", - "banana", "apple", "sandwich", "orange", "broccoli", - "carrot", "hot dog", "pizza", "donut", "cake", - # COCO-60-80 - "bed", "toilet", "laptop", "mouse", - "remote", "keyboard", "cell phone", "book", "clock", - "vase", "scissors", "teddy bear", "hair drier", "toothbrush", - "wine glass", "cup", "fork", "knife", "spoon", "bowl", - # Unknown - "unknown", -] - - -def register_voc_coco(name, dirname, split, year): - class_names = VOC_COCO_CATEGORIES - DatasetCatalog.register( - name, lambda: load_voc_instances(dirname, split, class_names)) - MetadataCatalog.get(name).set( - thing_classes=list(class_names), dirname=dirname, year=year, split=split - ) diff --git a/spaces/cvlab/zero123-live/taming-transformers/main.py b/spaces/cvlab/zero123-live/taming-transformers/main.py deleted file mode 100644 index 5a992a65d1465457f2685ec9f5f63f316d9e3164..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/main.py +++ /dev/null @@ -1,585 +0,0 @@ -import argparse, os, sys, datetime, glob, importlib -from omegaconf import OmegaConf -import numpy as np -from PIL import Image -import torch -import torchvision -from torch.utils.data import random_split, DataLoader, Dataset -import pytorch_lightning as pl -from pytorch_lightning import seed_everything -from pytorch_lightning.trainer import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor -from pytorch_lightning.utilities import rank_zero_only - -from taming.data.utils import custom_collate - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def get_parser(**parser_kwargs): - def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - parser = argparse.ArgumentParser(**parser_kwargs) - parser.add_argument( - "-n", - "--name", - type=str, - const=True, - default="", - nargs="?", - help="postfix for logdir", - ) - parser.add_argument( - "-r", - "--resume", - type=str, - const=True, - default="", - nargs="?", - help="resume from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-b", - "--base", - nargs="*", - metavar="base_config.yaml", - help="paths to base configs. Loaded from left-to-right. " - "Parameters can be overwritten or added with command-line options of the form `--key value`.", - default=list(), - ) - parser.add_argument( - "-t", - "--train", - type=str2bool, - const=True, - default=False, - nargs="?", - help="train", - ) - parser.add_argument( - "--no-test", - type=str2bool, - const=True, - default=False, - nargs="?", - help="disable test", - ) - parser.add_argument("-p", "--project", help="name of new or path to existing project") - parser.add_argument( - "-d", - "--debug", - type=str2bool, - nargs="?", - const=True, - default=False, - help="enable post-mortem debugging", - ) - parser.add_argument( - "-s", - "--seed", - type=int, - default=23, - help="seed for seed_everything", - ) - parser.add_argument( - "-f", - "--postfix", - type=str, - default="", - help="post-postfix for default name", - ) - - return parser - - -def nondefault_trainer_args(opt): - parser = argparse.ArgumentParser() - parser = Trainer.add_argparse_args(parser) - args = parser.parse_args([]) - return sorted(k for k in vars(args) if getattr(opt, k) != getattr(args, k)) - - -def instantiate_from_config(config): - if not "target" in config: - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -class WrappedDataset(Dataset): - """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset""" - def __init__(self, dataset): - self.data = dataset - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -class DataModuleFromConfig(pl.LightningDataModule): - def __init__(self, batch_size, train=None, validation=None, test=None, - wrap=False, num_workers=None): - super().__init__() - self.batch_size = batch_size - self.dataset_configs = dict() - self.num_workers = num_workers if num_workers is not None else batch_size*2 - if train is not None: - self.dataset_configs["train"] = train - self.train_dataloader = self._train_dataloader - if validation is not None: - self.dataset_configs["validation"] = validation - self.val_dataloader = self._val_dataloader - if test is not None: - self.dataset_configs["test"] = test - self.test_dataloader = self._test_dataloader - self.wrap = wrap - - def prepare_data(self): - for data_cfg in self.dataset_configs.values(): - instantiate_from_config(data_cfg) - - def setup(self, stage=None): - self.datasets = dict( - (k, instantiate_from_config(self.dataset_configs[k])) - for k in self.dataset_configs) - if self.wrap: - for k in self.datasets: - self.datasets[k] = WrappedDataset(self.datasets[k]) - - def _train_dataloader(self): - return DataLoader(self.datasets["train"], batch_size=self.batch_size, - num_workers=self.num_workers, shuffle=True, collate_fn=custom_collate) - - def _val_dataloader(self): - return DataLoader(self.datasets["validation"], - batch_size=self.batch_size, - num_workers=self.num_workers, collate_fn=custom_collate) - - def _test_dataloader(self): - return DataLoader(self.datasets["test"], batch_size=self.batch_size, - num_workers=self.num_workers, collate_fn=custom_collate) - - -class SetupCallback(Callback): - def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config): - super().__init__() - self.resume = resume - self.now = now - self.logdir = logdir - self.ckptdir = ckptdir - self.cfgdir = cfgdir - self.config = config - self.lightning_config = lightning_config - - def on_pretrain_routine_start(self, trainer, pl_module): - if trainer.global_rank == 0: - # Create logdirs and save configs - os.makedirs(self.logdir, exist_ok=True) - os.makedirs(self.ckptdir, exist_ok=True) - os.makedirs(self.cfgdir, exist_ok=True) - - print("Project config") - print(self.config.pretty()) - OmegaConf.save(self.config, - os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) - - print("Lightning config") - print(self.lightning_config.pretty()) - OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), - os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) - - else: - # ModelCheckpoint callback created log directory --- remove it - if not self.resume and os.path.exists(self.logdir): - dst, name = os.path.split(self.logdir) - dst = os.path.join(dst, "child_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - try: - os.rename(self.logdir, dst) - except FileNotFoundError: - pass - - -class ImageLogger(Callback): - def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True): - super().__init__() - self.batch_freq = batch_frequency - self.max_images = max_images - self.logger_log_images = { - pl.loggers.WandbLogger: self._wandb, - pl.loggers.TestTubeLogger: self._testtube, - } - self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)] - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - - @rank_zero_only - def _wandb(self, pl_module, images, batch_idx, split): - raise ValueError("No way wandb") - grids = dict() - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grids[f"{split}/{k}"] = wandb.Image(grid) - pl_module.logger.experiment.log(grids) - - @rank_zero_only - def _testtube(self, pl_module, images, batch_idx, split): - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grid = (grid+1.0)/2.0 # -1,1 -> 0,1; c,h,w - - tag = f"{split}/{k}" - pl_module.logger.experiment.add_image( - tag, grid, - global_step=pl_module.global_step) - - @rank_zero_only - def log_local(self, save_dir, split, images, - global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "images", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - - grid = (grid+1.0)/2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0,1).transpose(1,2).squeeze(-1) - grid = grid.numpy() - grid = (grid*255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format( - k, - global_step, - current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - if (self.check_frequency(batch_idx) and # batch_idx % self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split, pl_module=pl_module) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) - - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None) - logger_log_images(pl_module, images, pl_module.global_step, split) - - if is_train: - pl_module.train() - - def check_frequency(self, batch_idx): - if (batch_idx % self.batch_freq) == 0 or (batch_idx in self.log_steps): - try: - self.log_steps.pop(0) - except IndexError: - pass - return True - return False - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - self.log_img(pl_module, batch, batch_idx, split="train") - - def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - self.log_img(pl_module, batch, batch_idx, split="val") - - - -if __name__ == "__main__": - # custom parser to specify config files, train, test and debug mode, - # postfix, resume. - # `--key value` arguments are interpreted as arguments to the trainer. - # `nested.key=value` arguments are interpreted as config parameters. - # configs are merged from left-to-right followed by command line parameters. - - # model: - # base_learning_rate: float - # target: path to lightning module - # params: - # key: value - # data: - # target: main.DataModuleFromConfig - # params: - # batch_size: int - # wrap: bool - # train: - # target: path to train dataset - # params: - # key: value - # validation: - # target: path to validation dataset - # params: - # key: value - # test: - # target: path to test dataset - # params: - # key: value - # lightning: (optional, has sane defaults and can be specified on cmdline) - # trainer: - # additional arguments to trainer - # logger: - # logger to instantiate - # modelcheckpoint: - # modelcheckpoint to instantiate - # callbacks: - # callback1: - # target: importpath - # params: - # key: value - - now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - - # add cwd for convenience and to make classes in this file available when - # running as `python main.py` - # (in particular `main.DataModuleFromConfig`) - sys.path.append(os.getcwd()) - - parser = get_parser() - parser = Trainer.add_argparse_args(parser) - - opt, unknown = parser.parse_known_args() - if opt.name and opt.resume: - raise ValueError( - "-n/--name and -r/--resume cannot be specified both." - "If you want to resume training in a new log folder, " - "use -n/--name in combination with --resume_from_checkpoint" - ) - if opt.resume: - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - paths = opt.resume.split("/") - idx = len(paths)-paths[::-1].index("logs")+1 - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), opt.resume - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "checkpoints", "last.ckpt") - - opt.resume_from_checkpoint = ckpt - base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml"))) - opt.base = base_configs+opt.base - _tmp = logdir.split("/") - nowname = _tmp[_tmp.index("logs")+1] - else: - if opt.name: - name = "_"+opt.name - elif opt.base: - cfg_fname = os.path.split(opt.base[0])[-1] - cfg_name = os.path.splitext(cfg_fname)[0] - name = "_"+cfg_name - else: - name = "" - nowname = now+name+opt.postfix - logdir = os.path.join("logs", nowname) - - ckptdir = os.path.join(logdir, "checkpoints") - cfgdir = os.path.join(logdir, "configs") - seed_everything(opt.seed) - - try: - # init and save configs - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - config = OmegaConf.merge(*configs, cli) - lightning_config = config.pop("lightning", OmegaConf.create()) - # merge trainer cli with config - trainer_config = lightning_config.get("trainer", OmegaConf.create()) - # default to ddp - trainer_config["distributed_backend"] = "ddp" - for k in nondefault_trainer_args(opt): - trainer_config[k] = getattr(opt, k) - if not "gpus" in trainer_config: - del trainer_config["distributed_backend"] - cpu = True - else: - gpuinfo = trainer_config["gpus"] - print(f"Running on GPUs {gpuinfo}") - cpu = False - trainer_opt = argparse.Namespace(**trainer_config) - lightning_config.trainer = trainer_config - - # model - model = instantiate_from_config(config.model) - - # trainer and callbacks - trainer_kwargs = dict() - - # default logger configs - # NOTE wandb < 0.10.0 interferes with shutdown - # wandb >= 0.10.0 seems to fix it but still interferes with pudb - # debugging (wrongly sized pudb ui) - # thus prefer testtube for now - default_logger_cfgs = { - "wandb": { - "target": "pytorch_lightning.loggers.WandbLogger", - "params": { - "name": nowname, - "save_dir": logdir, - "offline": opt.debug, - "id": nowname, - } - }, - "testtube": { - "target": "pytorch_lightning.loggers.TestTubeLogger", - "params": { - "name": "testtube", - "save_dir": logdir, - } - }, - } - default_logger_cfg = default_logger_cfgs["testtube"] - logger_cfg = lightning_config.logger or OmegaConf.create() - logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg) - trainer_kwargs["logger"] = instantiate_from_config(logger_cfg) - - # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to - # specify which metric is used to determine best models - default_modelckpt_cfg = { - "target": "pytorch_lightning.callbacks.ModelCheckpoint", - "params": { - "dirpath": ckptdir, - "filename": "{epoch:06}", - "verbose": True, - "save_last": True, - } - } - if hasattr(model, "monitor"): - print(f"Monitoring {model.monitor} as checkpoint metric.") - default_modelckpt_cfg["params"]["monitor"] = model.monitor - default_modelckpt_cfg["params"]["save_top_k"] = 3 - - modelckpt_cfg = lightning_config.modelcheckpoint or OmegaConf.create() - modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg) - trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg) - - # add callback which sets up log directory - default_callbacks_cfg = { - "setup_callback": { - "target": "main.SetupCallback", - "params": { - "resume": opt.resume, - "now": now, - "logdir": logdir, - "ckptdir": ckptdir, - "cfgdir": cfgdir, - "config": config, - "lightning_config": lightning_config, - } - }, - "image_logger": { - "target": "main.ImageLogger", - "params": { - "batch_frequency": 750, - "max_images": 4, - "clamp": True - } - }, - "learning_rate_logger": { - "target": "main.LearningRateMonitor", - "params": { - "logging_interval": "step", - #"log_momentum": True - } - }, - } - callbacks_cfg = lightning_config.callbacks or OmegaConf.create() - callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg) - trainer_kwargs["callbacks"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg] - - trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs) - - # data - data = instantiate_from_config(config.data) - # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html - # calling these ourselves should not be necessary but it is. - # lightning still takes care of proper multiprocessing though - data.prepare_data() - data.setup() - - # configure learning rate - bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate - if not cpu: - ngpu = len(lightning_config.trainer.gpus.strip(",").split(',')) - else: - ngpu = 1 - accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches or 1 - print(f"accumulate_grad_batches = {accumulate_grad_batches}") - lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches - model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr - print("Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format( - model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr)) - - # allow checkpointing via USR1 - def melk(*args, **kwargs): - # run all checkpoint hooks - if trainer.global_rank == 0: - print("Summoning checkpoint.") - ckpt_path = os.path.join(ckptdir, "last.ckpt") - trainer.save_checkpoint(ckpt_path) - - def divein(*args, **kwargs): - if trainer.global_rank == 0: - import pudb; pudb.set_trace() - - import signal - signal.signal(signal.SIGUSR1, melk) - signal.signal(signal.SIGUSR2, divein) - - # run - if opt.train: - try: - trainer.fit(model, data) - except Exception: - melk() - raise - if not opt.no_test and not trainer.interrupted: - trainer.test(model, data) - except Exception: - if opt.debug and trainer.global_rank==0: - try: - import pudb as debugger - except ImportError: - import pdb as debugger - debugger.post_mortem() - raise - finally: - # move newly created debug project to debug_runs - if opt.debug and not opt.resume and trainer.global_rank==0: - dst, name = os.path.split(logdir) - dst = os.path.join(dst, "debug_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - os.rename(logdir, dst) diff --git a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/latex/attention/background.tex b/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/latex/attention/background.tex deleted file mode 100644 index 785069dc0f9143bad24e640056dd1072d5c6e5b5..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/latex/attention/background.tex +++ /dev/null @@ -1,58 +0,0 @@ -The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}. - -Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}. - -End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}. - -To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. -In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}. - - -%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs. - -%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation. - -%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture. - -%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost. - -%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length. - -%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs. - -%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture. - - - -%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)? - -%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence. - -%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model. - -%\begin{table}[h!] -%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.} -%\label{tab:op_complexities} -%\begin{center} -%\vspace{-5pt} -%\scalebox{0.75}{ - -%\begin{tabular}{l|c|c|c} -%\hline \hline -%Layer Type & Receptive & Complexity & Sequential \\ -% & Field & & Operations \\ -%\hline -%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\ -%\hline -%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\ -%\hline -%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\ -%\hline -%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\ -%\hline -%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\ -%\hline \hline -%\end{tabular} -%} -%\end{center} -%\end{table} \ No newline at end of file diff --git a/spaces/dashues/frieda/README.md b/spaces/dashues/frieda/README.md deleted file mode 100644 index 253bb81dd8e792d38c504ddc0ff49124274771ec..0000000000000000000000000000000000000000 --- a/spaces/dashues/frieda/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Frieda -emoji: 🌖 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/OTTags.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/OTTags.py deleted file mode 100644 index 859a3bcdcf29bcdda827edad766ffeab2f0b636a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/OTTags.py +++ /dev/null @@ -1,50 +0,0 @@ -# Data updated to OpenType 1.8.2 as of January 2018. - -# Complete list of OpenType script tags at: -# https://www.microsoft.com/typography/otspec/scripttags.htm - -# Most of the script tags are the same as the ISO 15924 tag but lowercased, -# so we only have to handle the exceptional cases: -# - KATAKANA and HIRAGANA both map to 'kana'; -# - spaces at the end are preserved, unlike ISO 15924; -# - we map special script codes for Inherited, Common and Unknown to DFLT. - -DEFAULT_SCRIPT = "DFLT" - -SCRIPT_ALIASES = { - "jamo": "hang", -} - -SCRIPT_EXCEPTIONS = { - "Hira": "kana", - "Hrkt": "kana", - "Laoo": "lao ", - "Yiii": "yi ", - "Nkoo": "nko ", - "Vaii": "vai ", - "Zmth": "math", - "Zinh": DEFAULT_SCRIPT, - "Zyyy": DEFAULT_SCRIPT, - "Zzzz": DEFAULT_SCRIPT, -} - -SCRIPT_EXCEPTIONS_REVERSED = { - "math": "Zmth", -} - -NEW_SCRIPT_TAGS = { - "Beng": ("bng2",), - "Deva": ("dev2",), - "Gujr": ("gjr2",), - "Guru": ("gur2",), - "Knda": ("knd2",), - "Mlym": ("mlm2",), - "Orya": ("ory2",), - "Taml": ("tml2",), - "Telu": ("tel2",), - "Mymr": ("mym2",), -} - -NEW_SCRIPT_TAGS_REVERSED = { - value: key for key, values in NEW_SCRIPT_TAGS.items() for value in values -} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/yaml-95012b83.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/yaml-95012b83.js deleted file mode 100644 index 3fef68bd6d3b922eebf9622184021189fa7e8cc2..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/yaml-95012b83.js +++ /dev/null @@ -1,2 +0,0 @@ -var l=["true","false","on","off","yes","no"],f=new RegExp("\\b(("+l.join(")|(")+"))$","i");const a={name:"yaml",token:function(n,i){var r=n.peek(),e=i.escaped;if(i.escaped=!1,r=="#"&&(n.pos==0||/\s/.test(n.string.charAt(n.pos-1))))return n.skipToEnd(),"comment";if(n.match(/^('([^']|\\.)*'?|"([^"]|\\.)*"?)/))return"string";if(i.literal&&n.indentation()>i.keyCol)return n.skipToEnd(),"string";if(i.literal&&(i.literal=!1),n.sol()){if(i.keyCol=0,i.pair=!1,i.pairStart=!1,n.match("---")||n.match("..."))return"def";if(n.match(/^\s*-\s+/))return"meta"}if(n.match(/^(\{|\}|\[|\])/))return r=="{"?i.inlinePairs++:r=="}"?i.inlinePairs--:r=="["?i.inlineList++:i.inlineList--,"meta";if(i.inlineList>0&&!e&&r==",")return n.next(),"meta";if(i.inlinePairs>0&&!e&&r==",")return i.keyCol=0,i.pair=!1,i.pairStart=!1,n.next(),"meta";if(i.pairStart){if(n.match(/^\s*(\||\>)\s*/))return i.literal=!0,"meta";if(n.match(/^\s*(\&|\*)[a-z0-9\._-]+\b/i))return"variable";if(i.inlinePairs==0&&n.match(/^\s*-?[0-9\.\,]+\s?$/)||i.inlinePairs>0&&n.match(/^\s*-?[0-9\.\,]+\s?(?=(,|}))/))return"number";if(n.match(f))return"keyword"}return!i.pair&&n.match(/^\s*(?:[,\[\]{}&*!|>'"%@`][^\s'":]|[^,\[\]{}#&*!|>'"%@`])[^#]*?(?=\s*:($|\s))/)?(i.pair=!0,i.keyCol=n.indentation(),"atom"):i.pair&&n.match(/^:\s*/)?(i.pairStart=!0,"meta"):(i.pairStart=!1,i.escaped=r=="\\",n.next(),null)},startState:function(){return{pair:!1,pairStart:!1,keyCol:0,inlinePairs:0,inlineList:0,literal:!1,escaped:!1}},languageData:{commentTokens:{line:"#"}}};export{a as yaml}; -//# sourceMappingURL=yaml-95012b83.js.map diff --git a/spaces/ddosxd/sydney-inpaint/config.py b/spaces/ddosxd/sydney-inpaint/config.py deleted file mode 100644 index 4d104551bded8c84b2d05166aa639bfc8992606f..0000000000000000000000000000000000000000 --- a/spaces/ddosxd/sydney-inpaint/config.py +++ /dev/null @@ -1,2 +0,0 @@ -SEEDCOUNT = 10 -NUM = 7 diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py deleted file mode 100644 index 637d6dd18698f3c6f1787c5e4d4514e4fc254908..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py +++ /dev/null @@ -1,109 +0,0 @@ -from dataclasses import dataclass -from typing import Optional, Tuple - -import torch -from torch import nn -from transformers import RobertaPreTrainedModel, XLMRobertaConfig, XLMRobertaModel -from transformers.utils import ModelOutput - - -@dataclass -class TransformationModelOutput(ModelOutput): - """ - Base class for text model's outputs that also contains a pooling of the last hidden states. - - Args: - text_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)` *optional* returned when model is initialized with `with_projection=True`): - The text embeddings obtained by applying the projection layer to the pooler_output. - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + - one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - projection_state: Optional[torch.FloatTensor] = None - last_hidden_state: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -class RobertaSeriesConfig(XLMRobertaConfig): - def __init__( - self, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - project_dim=512, - pooler_fn="cls", - learn_encoder=False, - use_attention_mask=True, - **kwargs, - ): - super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) - self.project_dim = project_dim - self.pooler_fn = pooler_fn - self.learn_encoder = learn_encoder - self.use_attention_mask = use_attention_mask - - -class RobertaSeriesModelWithTransformation(RobertaPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - base_model_prefix = "roberta" - config_class = RobertaSeriesConfig - - def __init__(self, config): - super().__init__(config) - self.roberta = XLMRobertaModel(config) - self.transformation = nn.Linear(config.hidden_size, config.project_dim) - self.post_init() - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - ): - r""" """ - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.base_model( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - projection_state = self.transformation(outputs.last_hidden_state) - - return TransformationModelOutput( - projection_state=projection_state, - last_hidden_state=outputs.last_hidden_state, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py deleted file mode 100644 index fa3c3d628e4f1ec74c6729db436e4f20c0e714c5..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py +++ /dev/null @@ -1,563 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import unittest - -import numpy as np -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, - logging, -) -from diffusers.utils import load_numpy, nightly, slow, torch_device -from diffusers.utils.testing_utils import CaptureLogger, require_torch_gpu - -from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS -from ...test_pipelines_common import PipelineTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class StableDiffusion2PipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableDiffusionPipeline - params = TEXT_TO_IMAGE_PARAMS - batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - # SD2-specific config below - attention_head_dim=(2, 4), - use_linear_projection=True, - ) - scheduler = DDIMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - clip_sample=False, - set_alpha_to_one=False, - ) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - sample_size=128, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - # SD2-specific config below - hidden_act="gelu", - projection_dim=512, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_ddim(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5649, 0.6022, 0.4804, 0.5270, 0.5585, 0.4643, 0.5159, 0.4963, 0.4793]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_pndm(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = PNDMScheduler(skip_prk_steps=True) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5099, 0.5677, 0.4671, 0.5128, 0.5697, 0.4676, 0.5277, 0.4964, 0.4946]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_k_lms(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.4717, 0.5376, 0.4568, 0.5225, 0.5734, 0.4797, 0.5467, 0.5074, 0.5043]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_k_euler_ancestral(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = EulerAncestralDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.4715, 0.5376, 0.4569, 0.5224, 0.5734, 0.4797, 0.5465, 0.5074, 0.5046]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_k_euler(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = EulerDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.4717, 0.5376, 0.4568, 0.5225, 0.5734, 0.4797, 0.5467, 0.5074, 0.5043]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_long_prompt(self): - components = self.get_dummy_components() - components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - do_classifier_free_guidance = True - negative_prompt = None - num_images_per_prompt = 1 - logger = logging.get_logger("diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion") - - prompt = 25 * "@" - with CaptureLogger(logger) as cap_logger_3: - text_embeddings_3 = sd_pipe._encode_prompt( - prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - prompt = 100 * "@" - with CaptureLogger(logger) as cap_logger: - text_embeddings = sd_pipe._encode_prompt( - prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - negative_prompt = "Hello" - with CaptureLogger(logger) as cap_logger_2: - text_embeddings_2 = sd_pipe._encode_prompt( - prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - assert text_embeddings_3.shape == text_embeddings_2.shape == text_embeddings.shape - assert text_embeddings.shape[1] == 77 - - assert cap_logger.out == cap_logger_2.out - # 100 - 77 + 1 (BOS token) + 1 (EOS token) = 25 - assert cap_logger.out.count("@") == 25 - assert cap_logger_3.out == "" - - -@slow -@require_torch_gpu -class StableDiffusion2PipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): - generator = torch.Generator(device=generator_device).manual_seed(seed) - latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64)) - latents = torch.from_numpy(latents).to(device=device, dtype=dtype) - inputs = { - "prompt": "a photograph of an astronaut riding a horse", - "latents": latents, - "generator": generator, - "num_inference_steps": 3, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_default_ddim(self): - pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base") - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.49493, 0.47896, 0.40798, 0.54214, 0.53212, 0.48202, 0.47656, 0.46329, 0.48506]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_pndm(self): - pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base") - pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.49493, 0.47896, 0.40798, 0.54214, 0.53212, 0.48202, 0.47656, 0.46329, 0.48506]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_k_lms(self): - pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base") - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.10440, 0.13115, 0.11100, 0.10141, 0.11440, 0.07215, 0.11332, 0.09693, 0.10006]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_attention_slicing(self): - torch.cuda.reset_peak_memory_stats() - pipe = StableDiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - # enable attention slicing - pipe.enable_attention_slicing() - inputs = self.get_inputs(torch_device, dtype=torch.float16) - image_sliced = pipe(**inputs).images - - mem_bytes = torch.cuda.max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - # make sure that less than 3.3 GB is allocated - assert mem_bytes < 3.3 * 10**9 - - # disable slicing - pipe.disable_attention_slicing() - inputs = self.get_inputs(torch_device, dtype=torch.float16) - image = pipe(**inputs).images - - # make sure that more than 3.3 GB is allocated - mem_bytes = torch.cuda.max_memory_allocated() - assert mem_bytes > 3.3 * 10**9 - assert np.abs(image_sliced - image).max() < 1e-3 - - def test_stable_diffusion_text2img_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array( - [-0.3862, -0.4507, -1.1729, 0.0686, -1.1045, 0.7124, -1.8301, 0.1903, 1.2773] - ) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array( - [0.2720, -0.1863, -0.7383, -0.5029, -0.7534, 0.3970, -0.7646, 0.4468, 1.2686] - ) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - - callback_fn.has_been_called = False - - pipe = StableDiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == inputs["num_inference_steps"] - - def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableDiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing(1) - pipe.enable_sequential_cpu_offload() - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - _ = pipe(**inputs) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 2.8 GB is allocated - assert mem_bytes < 2.8 * 10**9 - - def test_stable_diffusion_pipeline_with_model_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - - # Normal inference - - pipe = StableDiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-base", - torch_dtype=torch.float16, - ) - pipe.unet.set_default_attn_processor() - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - outputs = pipe(**inputs) - mem_bytes = torch.cuda.max_memory_allocated() - - # With model offloading - - # Reload but don't move to cuda - pipe = StableDiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-base", - torch_dtype=torch.float16, - ) - pipe.unet.set_default_attn_processor() - - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe.enable_model_cpu_offload() - pipe.set_progress_bar_config(disable=None) - inputs = self.get_inputs(torch_device, dtype=torch.float16) - outputs_offloaded = pipe(**inputs) - mem_bytes_offloaded = torch.cuda.max_memory_allocated() - - assert np.abs(outputs.images - outputs_offloaded.images).max() < 1e-3 - assert mem_bytes_offloaded < mem_bytes - assert mem_bytes_offloaded < 3 * 10**9 - for module in pipe.text_encoder, pipe.unet, pipe.vae: - assert module.device == torch.device("cpu") - - # With attention slicing - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe.enable_attention_slicing() - _ = pipe(**inputs) - mem_bytes_slicing = torch.cuda.max_memory_allocated() - assert mem_bytes_slicing < mem_bytes_offloaded - - -@nightly -@require_torch_gpu -class StableDiffusion2PipelineNightlyTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): - generator = torch.Generator(device=generator_device).manual_seed(seed) - latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64)) - latents = torch.from_numpy(latents).to(device=device, dtype=dtype) - inputs = { - "prompt": "a photograph of an astronaut riding a horse", - "latents": latents, - "generator": generator, - "num_inference_steps": 50, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_2_0_default_ddim(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base").to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_2_text2img/stable_diffusion_2_0_base_ddim.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_2_1_default_pndm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_pndm.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_ddim(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device) - sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_ddim.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_lms(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device) - sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_lms.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_euler(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device) - sd_pipe.scheduler = EulerDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_euler.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_dpm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device) - sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - inputs["num_inference_steps"] = 25 - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_dpm_multi.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 diff --git a/spaces/declare-lab/tango/diffusers/tests/test_layers_utils.py b/spaces/declare-lab/tango/diffusers/tests/test_layers_utils.py deleted file mode 100644 index d0e2102b539eed99d2a3c0910c1c7d2d9def4c6f..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/test_layers_utils.py +++ /dev/null @@ -1,586 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import unittest - -import numpy as np -import torch -from torch import nn - -from diffusers.models.attention import GEGLU, AdaLayerNorm, ApproximateGELU, AttentionBlock -from diffusers.models.embeddings import get_timestep_embedding -from diffusers.models.resnet import Downsample2D, ResnetBlock2D, Upsample2D -from diffusers.models.transformer_2d import Transformer2DModel -from diffusers.utils import torch_device - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class EmbeddingsTests(unittest.TestCase): - def test_timestep_embeddings(self): - embedding_dim = 256 - timesteps = torch.arange(16) - - t1 = get_timestep_embedding(timesteps, embedding_dim) - - # first vector should always be composed only of 0's and 1's - assert (t1[0, : embedding_dim // 2] - 0).abs().sum() < 1e-5 - assert (t1[0, embedding_dim // 2 :] - 1).abs().sum() < 1e-5 - - # last element of each vector should be one - assert (t1[:, -1] - 1).abs().sum() < 1e-5 - - # For large embeddings (e.g. 128) the frequency of every vector is higher - # than the previous one which means that the gradients of later vectors are - # ALWAYS higher than the previous ones - grad_mean = np.abs(np.gradient(t1, axis=-1)).mean(axis=1) - - prev_grad = 0.0 - for grad in grad_mean: - assert grad > prev_grad - prev_grad = grad - - def test_timestep_defaults(self): - embedding_dim = 16 - timesteps = torch.arange(10) - - t1 = get_timestep_embedding(timesteps, embedding_dim) - t2 = get_timestep_embedding( - timesteps, embedding_dim, flip_sin_to_cos=False, downscale_freq_shift=1, max_period=10_000 - ) - - assert torch.allclose(t1.cpu(), t2.cpu(), 1e-3) - - def test_timestep_flip_sin_cos(self): - embedding_dim = 16 - timesteps = torch.arange(10) - - t1 = get_timestep_embedding(timesteps, embedding_dim, flip_sin_to_cos=True) - t1 = torch.cat([t1[:, embedding_dim // 2 :], t1[:, : embedding_dim // 2]], dim=-1) - - t2 = get_timestep_embedding(timesteps, embedding_dim, flip_sin_to_cos=False) - - assert torch.allclose(t1.cpu(), t2.cpu(), 1e-3) - - def test_timestep_downscale_freq_shift(self): - embedding_dim = 16 - timesteps = torch.arange(10) - - t1 = get_timestep_embedding(timesteps, embedding_dim, downscale_freq_shift=0) - t2 = get_timestep_embedding(timesteps, embedding_dim, downscale_freq_shift=1) - - # get cosine half (vectors that are wrapped into cosine) - cosine_half = (t1 - t2)[:, embedding_dim // 2 :] - - # cosine needs to be negative - assert (np.abs((cosine_half <= 0).numpy()) - 1).sum() < 1e-5 - - def test_sinoid_embeddings_hardcoded(self): - embedding_dim = 64 - timesteps = torch.arange(128) - - # standard unet, score_vde - t1 = get_timestep_embedding(timesteps, embedding_dim, downscale_freq_shift=1, flip_sin_to_cos=False) - # glide, ldm - t2 = get_timestep_embedding(timesteps, embedding_dim, downscale_freq_shift=0, flip_sin_to_cos=True) - # grad-tts - t3 = get_timestep_embedding(timesteps, embedding_dim, scale=1000) - - assert torch.allclose( - t1[23:26, 47:50].flatten().cpu(), - torch.tensor([0.9646, 0.9804, 0.9892, 0.9615, 0.9787, 0.9882, 0.9582, 0.9769, 0.9872]), - 1e-3, - ) - assert torch.allclose( - t2[23:26, 47:50].flatten().cpu(), - torch.tensor([0.3019, 0.2280, 0.1716, 0.3146, 0.2377, 0.1790, 0.3272, 0.2474, 0.1864]), - 1e-3, - ) - assert torch.allclose( - t3[23:26, 47:50].flatten().cpu(), - torch.tensor([-0.9801, -0.9464, -0.9349, -0.3952, 0.8887, -0.9709, 0.5299, -0.2853, -0.9927]), - 1e-3, - ) - - -class Upsample2DBlockTests(unittest.TestCase): - def test_upsample_default(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 32, 32) - upsample = Upsample2D(channels=32, use_conv=False) - with torch.no_grad(): - upsampled = upsample(sample) - - assert upsampled.shape == (1, 32, 64, 64) - output_slice = upsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([-0.2173, -1.2079, -1.2079, 0.2952, 1.1254, 1.1254, 0.2952, 1.1254, 1.1254]) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_upsample_with_conv(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 32, 32) - upsample = Upsample2D(channels=32, use_conv=True) - with torch.no_grad(): - upsampled = upsample(sample) - - assert upsampled.shape == (1, 32, 64, 64) - output_slice = upsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([0.7145, 1.3773, 0.3492, 0.8448, 1.0839, -0.3341, 0.5956, 0.1250, -0.4841]) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_upsample_with_conv_out_dim(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 32, 32) - upsample = Upsample2D(channels=32, use_conv=True, out_channels=64) - with torch.no_grad(): - upsampled = upsample(sample) - - assert upsampled.shape == (1, 64, 64, 64) - output_slice = upsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([0.2703, 0.1656, -0.2538, -0.0553, -0.2984, 0.1044, 0.1155, 0.2579, 0.7755]) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_upsample_with_transpose(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 32, 32) - upsample = Upsample2D(channels=32, use_conv=False, use_conv_transpose=True) - with torch.no_grad(): - upsampled = upsample(sample) - - assert upsampled.shape == (1, 32, 64, 64) - output_slice = upsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([-0.3028, -0.1582, 0.0071, 0.0350, -0.4799, -0.1139, 0.1056, -0.1153, -0.1046]) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - -class Downsample2DBlockTests(unittest.TestCase): - def test_downsample_default(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64) - downsample = Downsample2D(channels=32, use_conv=False) - with torch.no_grad(): - downsampled = downsample(sample) - - assert downsampled.shape == (1, 32, 32, 32) - output_slice = downsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([-0.0513, -0.3889, 0.0640, 0.0836, -0.5460, -0.0341, -0.0169, -0.6967, 0.1179]) - max_diff = (output_slice.flatten() - expected_slice).abs().sum().item() - assert max_diff <= 1e-3 - # assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-1) - - def test_downsample_with_conv(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64) - downsample = Downsample2D(channels=32, use_conv=True) - with torch.no_grad(): - downsampled = downsample(sample) - - assert downsampled.shape == (1, 32, 32, 32) - output_slice = downsampled[0, -1, -3:, -3:] - - expected_slice = torch.tensor( - [0.9267, 0.5878, 0.3337, 1.2321, -0.1191, -0.3984, -0.7532, -0.0715, -0.3913], - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_downsample_with_conv_pad1(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64) - downsample = Downsample2D(channels=32, use_conv=True, padding=1) - with torch.no_grad(): - downsampled = downsample(sample) - - assert downsampled.shape == (1, 32, 32, 32) - output_slice = downsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([0.9267, 0.5878, 0.3337, 1.2321, -0.1191, -0.3984, -0.7532, -0.0715, -0.3913]) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_downsample_with_conv_out_dim(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64) - downsample = Downsample2D(channels=32, use_conv=True, out_channels=16) - with torch.no_grad(): - downsampled = downsample(sample) - - assert downsampled.shape == (1, 16, 32, 32) - output_slice = downsampled[0, -1, -3:, -3:] - expected_slice = torch.tensor([-0.6586, 0.5985, 0.0721, 0.1256, -0.1492, 0.4436, -0.2544, 0.5021, 1.1522]) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - -class ResnetBlock2DTests(unittest.TestCase): - def test_resnet_default(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64).to(torch_device) - temb = torch.randn(1, 128).to(torch_device) - resnet_block = ResnetBlock2D(in_channels=32, temb_channels=128).to(torch_device) - with torch.no_grad(): - output_tensor = resnet_block(sample, temb) - - assert output_tensor.shape == (1, 32, 64, 64) - output_slice = output_tensor[0, -1, -3:, -3:] - expected_slice = torch.tensor( - [-1.9010, -0.2974, -0.8245, -1.3533, 0.8742, -0.9645, -2.0584, 1.3387, -0.4746], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_restnet_with_use_in_shortcut(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64).to(torch_device) - temb = torch.randn(1, 128).to(torch_device) - resnet_block = ResnetBlock2D(in_channels=32, temb_channels=128, use_in_shortcut=True).to(torch_device) - with torch.no_grad(): - output_tensor = resnet_block(sample, temb) - - assert output_tensor.shape == (1, 32, 64, 64) - output_slice = output_tensor[0, -1, -3:, -3:] - expected_slice = torch.tensor( - [0.2226, -1.0791, -0.1629, 0.3659, -0.2889, -1.2376, 0.0582, 0.9206, 0.0044], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_resnet_up(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64).to(torch_device) - temb = torch.randn(1, 128).to(torch_device) - resnet_block = ResnetBlock2D(in_channels=32, temb_channels=128, up=True).to(torch_device) - with torch.no_grad(): - output_tensor = resnet_block(sample, temb) - - assert output_tensor.shape == (1, 32, 128, 128) - output_slice = output_tensor[0, -1, -3:, -3:] - expected_slice = torch.tensor( - [1.2130, -0.8753, -0.9027, 1.5783, -0.5362, -0.5001, 1.0726, -0.7732, -0.4182], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_resnet_down(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64).to(torch_device) - temb = torch.randn(1, 128).to(torch_device) - resnet_block = ResnetBlock2D(in_channels=32, temb_channels=128, down=True).to(torch_device) - with torch.no_grad(): - output_tensor = resnet_block(sample, temb) - - assert output_tensor.shape == (1, 32, 32, 32) - output_slice = output_tensor[0, -1, -3:, -3:] - expected_slice = torch.tensor( - [-0.3002, -0.7135, 0.1359, 0.0561, -0.7935, 0.0113, -0.1766, -0.6714, -0.0436], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_restnet_with_kernel_fir(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64).to(torch_device) - temb = torch.randn(1, 128).to(torch_device) - resnet_block = ResnetBlock2D(in_channels=32, temb_channels=128, kernel="fir", down=True).to(torch_device) - with torch.no_grad(): - output_tensor = resnet_block(sample, temb) - - assert output_tensor.shape == (1, 32, 32, 32) - output_slice = output_tensor[0, -1, -3:, -3:] - expected_slice = torch.tensor( - [-0.0934, -0.5729, 0.0909, -0.2710, -0.5044, 0.0243, -0.0665, -0.5267, -0.3136], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_restnet_with_kernel_sde_vp(self): - torch.manual_seed(0) - sample = torch.randn(1, 32, 64, 64).to(torch_device) - temb = torch.randn(1, 128).to(torch_device) - resnet_block = ResnetBlock2D(in_channels=32, temb_channels=128, kernel="sde_vp", down=True).to(torch_device) - with torch.no_grad(): - output_tensor = resnet_block(sample, temb) - - assert output_tensor.shape == (1, 32, 32, 32) - output_slice = output_tensor[0, -1, -3:, -3:] - expected_slice = torch.tensor( - [-0.3002, -0.7135, 0.1359, 0.0561, -0.7935, 0.0113, -0.1766, -0.6714, -0.0436], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - -class AttentionBlockTests(unittest.TestCase): - @unittest.skipIf( - torch_device == "mps", "Matmul crashes on MPS, see https://github.com/pytorch/pytorch/issues/84039" - ) - def test_attention_block_default(self): - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - sample = torch.randn(1, 32, 64, 64).to(torch_device) - attentionBlock = AttentionBlock( - channels=32, - num_head_channels=1, - rescale_output_factor=1.0, - eps=1e-6, - norm_num_groups=32, - ).to(torch_device) - with torch.no_grad(): - attention_scores = attentionBlock(sample) - - assert attention_scores.shape == (1, 32, 64, 64) - output_slice = attention_scores[0, -1, -3:, -3:] - - expected_slice = torch.tensor( - [-1.4975, -0.0038, -0.7847, -1.4567, 1.1220, -0.8962, -1.7394, 1.1319, -0.5427], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_attention_block_sd(self): - # This version uses SD params and is compatible with mps - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - sample = torch.randn(1, 512, 64, 64).to(torch_device) - attentionBlock = AttentionBlock( - channels=512, - rescale_output_factor=1.0, - eps=1e-6, - norm_num_groups=32, - ).to(torch_device) - with torch.no_grad(): - attention_scores = attentionBlock(sample) - - assert attention_scores.shape == (1, 512, 64, 64) - output_slice = attention_scores[0, -1, -3:, -3:] - - expected_slice = torch.tensor( - [-0.6621, -0.0156, -3.2766, 0.8025, -0.8609, 0.2820, 0.0905, -1.1179, -3.2126], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - -class Transformer2DModelTests(unittest.TestCase): - def test_spatial_transformer_default(self): - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - sample = torch.randn(1, 32, 64, 64).to(torch_device) - spatial_transformer_block = Transformer2DModel( - in_channels=32, - num_attention_heads=1, - attention_head_dim=32, - dropout=0.0, - cross_attention_dim=None, - ).to(torch_device) - with torch.no_grad(): - attention_scores = spatial_transformer_block(sample).sample - - assert attention_scores.shape == (1, 32, 64, 64) - output_slice = attention_scores[0, -1, -3:, -3:] - - expected_slice = torch.tensor( - [-1.9455, -0.0066, -1.3933, -1.5878, 0.5325, -0.6486, -1.8648, 0.7515, -0.9689], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_spatial_transformer_cross_attention_dim(self): - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - sample = torch.randn(1, 64, 64, 64).to(torch_device) - spatial_transformer_block = Transformer2DModel( - in_channels=64, - num_attention_heads=2, - attention_head_dim=32, - dropout=0.0, - cross_attention_dim=64, - ).to(torch_device) - with torch.no_grad(): - context = torch.randn(1, 4, 64).to(torch_device) - attention_scores = spatial_transformer_block(sample, context).sample - - assert attention_scores.shape == (1, 64, 64, 64) - output_slice = attention_scores[0, -1, -3:, -3:] - - expected_slice = torch.tensor( - [-0.2555, -0.8877, -2.4739, -2.2251, 1.2714, 0.0807, -0.4161, -1.6408, -0.0471], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_spatial_transformer_timestep(self): - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - num_embeds_ada_norm = 5 - - sample = torch.randn(1, 64, 64, 64).to(torch_device) - spatial_transformer_block = Transformer2DModel( - in_channels=64, - num_attention_heads=2, - attention_head_dim=32, - dropout=0.0, - cross_attention_dim=64, - num_embeds_ada_norm=num_embeds_ada_norm, - ).to(torch_device) - with torch.no_grad(): - timestep_1 = torch.tensor(1, dtype=torch.long).to(torch_device) - timestep_2 = torch.tensor(2, dtype=torch.long).to(torch_device) - attention_scores_1 = spatial_transformer_block(sample, timestep=timestep_1).sample - attention_scores_2 = spatial_transformer_block(sample, timestep=timestep_2).sample - - assert attention_scores_1.shape == (1, 64, 64, 64) - assert attention_scores_2.shape == (1, 64, 64, 64) - - output_slice_1 = attention_scores_1[0, -1, -3:, -3:] - output_slice_2 = attention_scores_2[0, -1, -3:, -3:] - - expected_slice_1 = torch.tensor( - [-0.1874, -0.9704, -1.4290, -1.3357, 1.5138, 0.3036, -0.0976, -1.1667, 0.1283], device=torch_device - ) - expected_slice_2 = torch.tensor( - [-0.3493, -1.0924, -1.6161, -1.5016, 1.4245, 0.1367, -0.2526, -1.3109, -0.0547], device=torch_device - ) - - assert torch.allclose(output_slice_1.flatten(), expected_slice_1, atol=1e-3) - assert torch.allclose(output_slice_2.flatten(), expected_slice_2, atol=1e-3) - - def test_spatial_transformer_dropout(self): - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - sample = torch.randn(1, 32, 64, 64).to(torch_device) - spatial_transformer_block = ( - Transformer2DModel( - in_channels=32, - num_attention_heads=2, - attention_head_dim=16, - dropout=0.3, - cross_attention_dim=None, - ) - .to(torch_device) - .eval() - ) - with torch.no_grad(): - attention_scores = spatial_transformer_block(sample).sample - - assert attention_scores.shape == (1, 32, 64, 64) - output_slice = attention_scores[0, -1, -3:, -3:] - - expected_slice = torch.tensor( - [-1.9380, -0.0083, -1.3771, -1.5819, 0.5209, -0.6441, -1.8545, 0.7563, -0.9615], device=torch_device - ) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - @unittest.skipIf(torch_device == "mps", "MPS does not support float64") - def test_spatial_transformer_discrete(self): - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - num_embed = 5 - - sample = torch.randint(0, num_embed, (1, 32)).to(torch_device) - spatial_transformer_block = ( - Transformer2DModel( - num_attention_heads=1, - attention_head_dim=32, - num_vector_embeds=num_embed, - sample_size=16, - ) - .to(torch_device) - .eval() - ) - - with torch.no_grad(): - attention_scores = spatial_transformer_block(sample).sample - - assert attention_scores.shape == (1, num_embed - 1, 32) - - output_slice = attention_scores[0, -2:, -3:] - - expected_slice = torch.tensor([-1.7648, -1.0241, -2.0985, -1.8035, -1.6404, -1.2098], device=torch_device) - assert torch.allclose(output_slice.flatten(), expected_slice, atol=1e-3) - - def test_spatial_transformer_default_norm_layers(self): - spatial_transformer_block = Transformer2DModel(num_attention_heads=1, attention_head_dim=32, in_channels=32) - - assert spatial_transformer_block.transformer_blocks[0].norm1.__class__ == nn.LayerNorm - assert spatial_transformer_block.transformer_blocks[0].norm3.__class__ == nn.LayerNorm - - def test_spatial_transformer_ada_norm_layers(self): - spatial_transformer_block = Transformer2DModel( - num_attention_heads=1, - attention_head_dim=32, - in_channels=32, - num_embeds_ada_norm=5, - ) - - assert spatial_transformer_block.transformer_blocks[0].norm1.__class__ == AdaLayerNorm - assert spatial_transformer_block.transformer_blocks[0].norm3.__class__ == nn.LayerNorm - - def test_spatial_transformer_default_ff_layers(self): - spatial_transformer_block = Transformer2DModel( - num_attention_heads=1, - attention_head_dim=32, - in_channels=32, - ) - - assert spatial_transformer_block.transformer_blocks[0].ff.net[0].__class__ == GEGLU - assert spatial_transformer_block.transformer_blocks[0].ff.net[1].__class__ == nn.Dropout - assert spatial_transformer_block.transformer_blocks[0].ff.net[2].__class__ == nn.Linear - - dim = 32 - inner_dim = 128 - - # First dimension change - assert spatial_transformer_block.transformer_blocks[0].ff.net[0].proj.in_features == dim - # NOTE: inner_dim * 2 because GEGLU - assert spatial_transformer_block.transformer_blocks[0].ff.net[0].proj.out_features == inner_dim * 2 - - # Second dimension change - assert spatial_transformer_block.transformer_blocks[0].ff.net[2].in_features == inner_dim - assert spatial_transformer_block.transformer_blocks[0].ff.net[2].out_features == dim - - def test_spatial_transformer_geglu_approx_ff_layers(self): - spatial_transformer_block = Transformer2DModel( - num_attention_heads=1, - attention_head_dim=32, - in_channels=32, - activation_fn="geglu-approximate", - ) - - assert spatial_transformer_block.transformer_blocks[0].ff.net[0].__class__ == ApproximateGELU - assert spatial_transformer_block.transformer_blocks[0].ff.net[1].__class__ == nn.Dropout - assert spatial_transformer_block.transformer_blocks[0].ff.net[2].__class__ == nn.Linear - - dim = 32 - inner_dim = 128 - - # First dimension change - assert spatial_transformer_block.transformer_blocks[0].ff.net[0].proj.in_features == dim - assert spatial_transformer_block.transformer_blocks[0].ff.net[0].proj.out_features == inner_dim - - # Second dimension change - assert spatial_transformer_block.transformer_blocks[0].ff.net[2].in_features == inner_dim - assert spatial_transformer_block.transformer_blocks[0].ff.net[2].out_features == dim - - def test_spatial_transformer_attention_bias(self): - spatial_transformer_block = Transformer2DModel( - num_attention_heads=1, attention_head_dim=32, in_channels=32, attention_bias=True - ) - - assert spatial_transformer_block.transformer_blocks[0].attn1.to_q.bias is not None - assert spatial_transformer_block.transformer_blocks[0].attn1.to_k.bias is not None - assert spatial_transformer_block.transformer_blocks[0].attn1.to_v.bias is not None diff --git a/spaces/deepghs/auto_image_censor/censor.py b/spaces/deepghs/auto_image_censor/censor.py deleted file mode 100644 index ebc6f4eb7ae4e7a6376a4d38f755a9e35b34e348..0000000000000000000000000000000000000000 --- a/spaces/deepghs/auto_image_censor/censor.py +++ /dev/null @@ -1,25 +0,0 @@ -from typing import Tuple - -from PIL import Image, ImageFilter - - -def _pixelate(area: Image.Image, radius: int, **kwargs) -> Image.Image: - width, height = area.size - small = area.resize((width // radius, height // radius)) - return small.resize(area.size, Image.NEAREST) - - -def _blur(area: Image.Image, radius: int, **kwargs) -> Image.Image: - return area.filter(ImageFilter.GaussianBlur(radius)) - - -def _color(area: Image.Image, color: str = 'black', **kwargs) -> Image.Image: - return Image.new('RGB', area.size, color) - - -def censor_area(image_: Image.Image, area: Tuple[int, int, int, int], func, *args, **kwargs): - original_area = image_.crop(area) - processed_area = func(original_area, *args, **kwargs) - assert processed_area.size == original_area.size - image_.paste(processed_area, area) - return image_ diff --git a/spaces/derful/Chatgpt-academic/project_self_analysis.md b/spaces/derful/Chatgpt-academic/project_self_analysis.md deleted file mode 100644 index c8174215e644594a3fae3e33cea597fbefcb4f49..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/project_self_analysis.md +++ /dev/null @@ -1,122 +0,0 @@ -# chatgpt-academic项目分析报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方全怪GPT) - -## [0/10] 程序摘要: check_proxy.py - -这个程序是一个用来检查代理服务器是否有效的 Python 程序代码。程序文件名为 check_proxy.py。其中定义了一个函数 check_proxy,该函数接收一个代理配置信息 proxies,使用 requests 库向一个代理服务器发送请求,获取该代理的所在地信息并返回。如果请求超时或者异常,该函数将返回一个代理无效的结果。 - -程序代码分为两个部分,首先是 check_proxy 函数的定义部分,其次是程序文件的入口部分,在该部分代码中,程序从 config_private.py 文件或者 config.py 文件中加载代理配置信息,然后调用 check_proxy 函数来检测代理服务器是否有效。如果配置文件 config_private.py 存在,则会加载其中的代理配置信息,否则会从 config.py 文件中读取。 - -## [1/10] 程序摘要: config.py - -本程序文件名为config.py,主要功能是存储应用所需的常量和配置信息。 - -其中,包含了应用所需的OpenAI API密钥、API接口地址、网络代理设置、超时设置、网络端口和OpenAI模型选择等信息,在运行应用前需要进行相应的配置。在未配置网络代理时,程序给出了相应的警告提示。 - -此外,还包含了一个检查函数,用于检查是否忘记修改API密钥。 - -总之,config.py文件是应用中的一个重要配置文件,用来存储应用所需的常量和配置信息,需要在应用运行前进行相应的配置。 - -## [2/10] 程序摘要: config_private.py - -该文件是一个配置文件,命名为config_private.py。它是一个Python脚本,用于配置OpenAI的API密钥、模型和其它相关设置。该配置文件还可以设置是否使用代理。如果使用代理,需要设置代理协议、地址和端口。在设置代理之后,该文件还包括一些用于测试代理是否正常工作的代码。该文件还包括超时时间、随机端口、重试次数等设置。在文件末尾,还有一个检查代码,如果没有更改API密钥,则抛出异常。 - -## [3/10] 程序摘要: functional.py - -该程序文件名为 functional.py,其中包含一个名为 get_functionals 的函数,该函数返回一个字典,该字典包含了各种翻译、校对等功能的名称、前缀、后缀以及默认按钮颜色等信息。具体功能包括:英语学术润色、中文学术润色、查找语法错误、中英互译、中译英、学术中译英、英译中、解释代码等。该程序的作用为提供各种翻译、校对等功能的模板,以便后续程序可以直接调用。 - -(Author补充:这个文件汇总了模块化的Prompt调用,如果发现了新的好用Prompt,别藏着哦^_^速速PR) - - -## [4/10] 程序摘要: functional_crazy.py - -这个程序文件 functional_crazy.py 导入了一些 python 模块,并提供了一个函数 get_crazy_functionals(),该函数返回不同实验功能的描述和函数。其中,使用的的模块包括: - -- crazy_functions.读文章写摘要 中的 读文章写摘要 -- crazy_functions.生成函数注释 中的 批量生成函数注释 -- crazy_functions.解析项目源代码 中的 解析项目本身、解析一个Python项目、解析一个C项目的头文件、解析一个C项目 -- crazy_functions.高级功能函数模板 中的 高阶功能模板函数 - -返回的实验功能函数包括: - -- "[实验] 请解析并解构此项目本身",包含函数:解析项目本身 -- "[实验] 解析整个py项目(配合input输入框)",包含函数:解析一个Python项目 -- "[实验] 解析整个C++项目头文件(配合input输入框)",包含函数:解析一个C项目的头文件 -- "[实验] 解析整个C++项目(配合input输入框)",包含函数:解析一个C项目 -- "[实验] 读tex论文写摘要(配合input输入框)",包含函数:读文章写摘要 -- "[实验] 批量生成函数注释(配合input输入框)",包含函数:批量生成函数注释 -- "[实验] 实验功能函数模板",包含函数:高阶功能模板函数 - -这些函数用于系统开发和测试,方便开发者进行特定程序语言后台功能开发的测试和实验,增加系统可靠稳定性和用户友好性。 - -(Author补充:这个文件汇总了模块化的函数,如此设计以方便任何新功能的加入) - -## [5/10] 程序摘要: main.py - -该程序是一个基于Gradio框架的聊天机器人应用程序。用户可以通过输入问题来获取答案,并与聊天机器人进行对话。该应用程序还集成了一些实验性功能模块,用户可以通过上传本地文件或点击相关按钮来使用这些模块。程序还可以生成对话日志,并且具有一些外观上的调整。在运行时,它会自动打开一个网页并在本地启动服务器。 - - -## [6/10] 程序摘要: predict.py - -该程序文件名为predict.py,主要是针对一个基于ChatGPT的聊天机器人进行交互和预测。 - -第一部分是导入所需的库和配置文件。 - -第二部分是一个用于获取Openai返回的完整错误信息的函数。 - -第三部分是用于一次性完成向ChatGPT发送请求和等待回复的函数。 - -第四部分是用于基础的对话功能的函数,通过stream参数可以选择是否显示中间的过程。 - -第五部分是用于整合所需信息和选择LLM模型生成的HTTP请求。 - -(Author补充:主要是predict_no_ui和predict两个函数。前者不用stream,方便、高效、易用。后者用stream,展现效果好。) - -## [7/10] 程序摘要: show_math.py - -这是一个名为show_math.py的Python程序文件,主要用于将Markdown-LaTeX混合文本转换为HTML格式,并包括MathML数学公式。程序使用latex2mathml.converter库将LaTeX公式转换为MathML格式,并使用正则表达式递归地翻译输入的Markdown-LaTeX混合文本。程序包括转换成双美元符号($$)形式、转换成单美元符号($)形式、转换成\[\]形式以及转换成\(\)形式的LaTeX数学公式。如果转换中出现错误,程序将返回相应的错误消息。 - -## [8/10] 程序摘要: theme.py - -这是一个名为theme.py的程序文件,用于设置Gradio界面的颜色和字体主题。该文件中定义了一个名为adjust_theme()的函数,其作用是返回一个Gradio theme对象,设置了Gradio界面的颜色和字体主题。在该函数里面,使用了Graido可用的颜色列表,主要参数包括primary_hue、neutral_hue、font和font_mono等,用于设置Gradio界面的主题色调、字体等。另外,该函数还实现了一些参数的自定义,如input_background_fill_dark、button_transition、button_shadow_hover等,用于设置Gradio界面的渐变、阴影等特效。如果Gradio版本过于陈旧,该函数会抛出异常并返回None。 - -## [9/10] 程序摘要: toolbox.py - -该文件为Python程序文件,文件名为toolbox.py。主要功能包括: - -1. 导入markdown、mdtex2html、threading、functools等模块。 -2. 定义函数predict_no_ui_but_counting_down,用于生成对话。 -3. 定义函数write_results_to_file,用于将对话记录生成Markdown文件。 -4. 定义函数regular_txt_to_markdown,将普通文本转换为Markdown格式的文本。 -5. 定义装饰器函数CatchException,用于捕获函数执行异常并返回生成器。 -6. 定义函数report_execption,用于向chatbot中添加错误信息。 -7. 定义函数text_divide_paragraph,用于将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 -8. 定义函数markdown_convertion,用于将Markdown格式的文本转换为HTML格式。 -9. 定义函数format_io,用于将输入和输出解析为HTML格式。 -10. 定义函数find_free_port,用于返回当前系统中可用的未使用端口。 -11. 定义函数extract_archive,用于解压归档文件。 -12. 定义函数find_recent_files,用于查找最近创建的文件。 -13. 定义函数on_file_uploaded,用于处理上传文件的操作。 -14. 定义函数on_report_generated,用于处理生成报告文件的操作。 - -## 程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。 - -这是一个基于Gradio框架的聊天机器人应用,支持通过文本聊天来获取答案,并可以使用一系列实验性功能模块,例如生成函数注释、解析项目源代码、读取Latex论文写摘要等。 程序架构分为前端和后端两个部分。前端使用Gradio实现,包括用户输入区域、应答区域、按钮、调用方式等。后端使用Python实现,包括聊天机器人模型、实验性功能模块、模板模块、管理模块、主程序模块等。 - -每个程序文件的功能如下: - -| 文件名 | 功能描述 | -|:----:|:----:| -| check_proxy.py | 检查代理服务器是否有效 | -| config.py | 存储应用所需的常量和配置信息 | -| config_private.py | 存储Openai的API密钥、模型和其他相关设置 | -| functional.py | 提供各种翻译、校对等实用模板 | -| functional_crazy.py | 提供一些实验性质的高级功能 | -| main.py | 基于Gradio框架的聊天机器人应用程序的主程序 | -| predict.py | 用于chatbot预测方案创建,向ChatGPT发送请求和获取回复 | -| show_math.py | 将Markdown-LaTeX混合文本转换为HTML格式,并包括MathML数学公式 | -| theme.py | 设置Gradio界面的颜色和字体主题 | -| toolbox.py | 定义一系列工具函数,用于对输入输出进行格式转换、文件操作、异常捕捉和处理等 | - -这些程序文件共同组成了一个聊天机器人应用程序的前端和后端实现,使用户可以方便地进行聊天,并可以使用相应的实验功能模块。 - diff --git a/spaces/diacanFperku/AutoGPT/2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambranol.md b/spaces/diacanFperku/AutoGPT/2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambranol.md deleted file mode 100644 index f9ac52ebce7011dc1bd314948ae26562c5ee96c3..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambranol.md +++ /dev/null @@ -1,11 +0,0 @@ -
            -

            https://kyle7.zendesk.com/entries/4500085-pueden-sacar-las-regiones -evaluacion-objetiva-de -fisica-vectorial-de-vallejo-zambrano.html. SOLUCIONARIO FSICA VECTORIAL 1 VALLEJO ZAMBRANO La solucin de los ejercicios. Fisica Vectorial De Vallejo Zambrano - 2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambrano >> f42d4e2d88 Para descargar gratuitamente el.

            -

            2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambranol


            Download Filehttps://gohhs.com/2uFTP5



            -

            bbb47c7e7c https://coub.com/stories/2386432-2011-evaluacion-objetiva-de-fisica-vectorial-de-vallejo-zambrano. Tutorial Trabajo 2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambrano Dice Vallejo - Fisica Vectorial 2011 Todos los derechos reservados.

            -

            h943de18c4 https://coub.com/stories/3784869-2011-evaluacion-objetiva-de-fisica-vectorial-de-vallejo-zambrano. Chicas Vectorialfisica vectorial 1 vallejo zambrano - 2011 Evaluacion Objetiva De Fisica Vectorial De Vallejo Zambrano.

            -

            260eba9be1 https://coub.com/stories/3650396-2011-evaluacion-objetiva-de-fisica-vectorial-de-vallejo-zambrano. Explicaciones de lenguaje en pedagogia fisica vectorial 1 vallejo Zambrano 2011 Evaluacion Objetiva de Fisica Vectorial.

            -

            4ef52da372 https://coub.com/stories/1187718-2011-evaluacion-objetiva-de-fisica-vectorial-de-vallejo-zambrano. Evaluacion Objetiva de Fisica Vectorial De Vallejo Zambrano - Todo el trabajo sera enviado en un conmutador de correos. No fisica vectorial 1 vallejo zambrano 2011 Evaluacion Objetiva.

            -

            -

            7d4c1b9dae https://coub.com/stories/2770063-2011-evaluacion-objetiva-de-fisica-vectorial-de-vallejo-zambrano. Como identificar los componentes de un vector 3rd element vectoring resulta muy exagerado.

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Antrenmanlarla Paragraf Anlam Bilgisi 25.pdf.md b/spaces/diacanFperku/AutoGPT/Antrenmanlarla Paragraf Anlam Bilgisi 25.pdf.md deleted file mode 100644 index 67dd4dd2bd8614e5713420bdf261a3b00c928bfb..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Antrenmanlarla Paragraf Anlam Bilgisi 25.pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Antrenmanlarla Paragraf Anlam Bilgisi 25.pdf


            DOWNLOAD » https://gohhs.com/2uFVqD



            - - d5da3c52bf
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/Hard DIsk Sentinel PRO 4.20 Cracked Serial Key.md b/spaces/diacanFperku/AutoGPT/Hard DIsk Sentinel PRO 4.20 Cracked Serial Key.md deleted file mode 100644 index 85e16b686f32fb35cac233a008244e09501ef291..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Hard DIsk Sentinel PRO 4.20 Cracked Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Hard DIsk Sentinel PRO 4.20 Cracked Serial Key


            Download Ziphttps://gohhs.com/2uFU0u



            -
            -Hard Disk Sentinel Pro 4.20 Build 6014 Final + Keygen FFF - [pro] torrent download locations × Hard Disk Sentinel Pro 4 20 Build 6014 Final Keygen FFF. 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/indexing/codecs/residual.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/indexing/codecs/residual.py deleted file mode 100644 index 0ecbb0f6986d79a78413cea6a7baec44e126cddb..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/indexing/codecs/residual.py +++ /dev/null @@ -1,276 +0,0 @@ -""" -EVENTUALLY: Tune the batch sizes selected here for a good balance of speed and generality. -""" - -import os -import torch -import numpy as np -from itertools import product - -from colbert.infra.config import ColBERTConfig -from colbert.indexing.codecs.residual_embeddings import ResidualEmbeddings -from colbert.utils.utils import print_message - -import pathlib -from torch.utils.cpp_extension import load - - -class ResidualCodec: - Embeddings = ResidualEmbeddings - - def __init__(self, config, centroids, avg_residual=None, bucket_cutoffs=None, bucket_weights=None): - self.use_gpu = config.total_visible_gpus > 0 - - ResidualCodec.try_load_torch_extensions(self.use_gpu) - - if self.use_gpu > 0: - self.centroids = centroids.cuda().half() - else: - self.centroids = centroids.float() - self.dim, self.nbits = config.dim, config.nbits - self.avg_residual = avg_residual - - if torch.is_tensor(self.avg_residual): - if self.use_gpu: - self.avg_residual = self.avg_residual.cuda().half() - - if torch.is_tensor(bucket_cutoffs): - if self.use_gpu: - bucket_cutoffs = bucket_cutoffs.cuda() - bucket_weights = bucket_weights.half().cuda() - - self.bucket_cutoffs = bucket_cutoffs - self.bucket_weights = bucket_weights - if not self.use_gpu and self.bucket_weights is not None: - self.bucket_weights = self.bucket_weights.to(torch.float32) - - self.arange_bits = torch.arange(0, self.nbits, device='cuda' if self.use_gpu else 'cpu', dtype=torch.uint8) - - self.rank = config.rank - - # We reverse the residual bits because arange_bits as - # currently constructed produces results with the reverse - # of the expected endianness - self.reversed_bit_map = [] - mask = (1 << self.nbits) - 1 - for i in range(256): - # The reversed byte - z = 0 - for j in range(8, 0, -self.nbits): - # Extract a subsequence of length n bits - x = (i >> (j - self.nbits)) & mask - - # Reverse the endianness of each bit subsequence (e.g. 10 -> 01) - y = 0 - for k in range(self.nbits - 1, -1, -1): - y += ((x >> (self.nbits - k - 1)) & 1) * (2 ** k) - - # Set the corresponding bits in the output byte - z |= y - if j > self.nbits: - z <<= self.nbits - self.reversed_bit_map.append(z) - self.reversed_bit_map = torch.tensor(self.reversed_bit_map).to(torch.uint8) - - # A table of all possible lookup orders into bucket_weights - # given n bits per lookup - keys_per_byte = 8 // self.nbits - if self.bucket_weights is not None: - self.decompression_lookup_table = ( - torch.tensor( - list( - product( - list(range(len(self.bucket_weights))), - repeat=keys_per_byte - ) - ) - ) - .to(torch.uint8) - ) - else: - self.decompression_lookup_table = None - if self.use_gpu: - self.reversed_bit_map = self.reversed_bit_map.cuda() - if self.decompression_lookup_table is not None: - self.decompression_lookup_table = self.decompression_lookup_table.cuda() - - @classmethod - def try_load_torch_extensions(cls, use_gpu): - if hasattr(cls, "loaded_extensions") or not use_gpu: - return - - print_message(f"Loading decompress_residuals_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...") - decompress_residuals_cpp = load( - name="decompress_residuals_cpp", - sources=[ - os.path.join( - pathlib.Path(__file__).parent.resolve(), "decompress_residuals.cpp" - ), - os.path.join( - pathlib.Path(__file__).parent.resolve(), "decompress_residuals.cu" - ), - ], - verbose=os.getenv("COLBERT_LOAD_TORCH_EXTENSION_VERBOSE", "False") == "True", - ) - cls.decompress_residuals = decompress_residuals_cpp.decompress_residuals_cpp - - print_message(f"Loading packbits_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...") - packbits_cpp = load( - name="packbits_cpp", - sources=[ - os.path.join( - pathlib.Path(__file__).parent.resolve(), "packbits.cpp" - ), - os.path.join( - pathlib.Path(__file__).parent.resolve(), "packbits.cu" - ), - ], - verbose=os.getenv("COLBERT_LOAD_TORCH_EXTENSION_VERBOSE", "False") == "True", - ) - cls.packbits = packbits_cpp.packbits_cpp - - cls.loaded_extensions = True - - @classmethod - def load(cls, index_path): - config = ColBERTConfig.load_from_index(index_path) - centroids_path = os.path.join(index_path, 'centroids.pt') - avgresidual_path = os.path.join(index_path, 'avg_residual.pt') - buckets_path = os.path.join(index_path, 'buckets.pt') - - centroids = torch.load(centroids_path, map_location='cpu') - avg_residual = torch.load(avgresidual_path, map_location='cpu') - bucket_cutoffs, bucket_weights = torch.load(buckets_path, map_location='cpu') - - if avg_residual.dim() == 0: - avg_residual = avg_residual.item() - - return cls(config=config, centroids=centroids, avg_residual=avg_residual, bucket_cutoffs=bucket_cutoffs, bucket_weights=bucket_weights) - - def save(self, index_path): - assert self.avg_residual is not None - assert torch.is_tensor(self.bucket_cutoffs), self.bucket_cutoffs - assert torch.is_tensor(self.bucket_weights), self.bucket_weights - - centroids_path = os.path.join(index_path, 'centroids.pt') - avgresidual_path = os.path.join(index_path, 'avg_residual.pt') - buckets_path = os.path.join(index_path, 'buckets.pt') - - torch.save(self.centroids.half(), centroids_path) - torch.save((self.bucket_cutoffs, self.bucket_weights), buckets_path) - - if torch.is_tensor(self.avg_residual): - torch.save(self.avg_residual, avgresidual_path) - else: - torch.save(torch.tensor([self.avg_residual]), avgresidual_path) - - def compress(self, embs): - codes, residuals = [], [] - - for batch in embs.split(1 << 18): - if self.use_gpu: - batch = batch.cuda().half() - codes_ = self.compress_into_codes(batch, out_device=batch.device) - centroids_ = self.lookup_centroids(codes_, out_device=batch.device) - - residuals_ = (batch - centroids_) - - codes.append(codes_.cpu()) - residuals.append(self.binarize(residuals_).cpu()) - - codes = torch.cat(codes) - residuals = torch.cat(residuals) - - return ResidualCodec.Embeddings(codes, residuals) - - def binarize(self, residuals): - residuals = torch.bucketize(residuals.float(), self.bucket_cutoffs).to(dtype=torch.uint8) - residuals = residuals.unsqueeze(-1).expand(*residuals.size(), self.nbits) # add a new nbits-wide dim - residuals = residuals >> self.arange_bits # divide by 2^bit for each bit position - residuals = residuals & 1 # apply mod 2 to binarize - - assert self.dim % 8 == 0 - assert self.dim % (self.nbits * 8) == 0, (self.dim, self.nbits) - - if self.use_gpu: - residuals_packed = ResidualCodec.packbits(residuals.contiguous().flatten()) - else: - residuals_packed = np.packbits(np.asarray(residuals.contiguous().flatten())) - residuals_packed = torch.as_tensor(residuals_packed, dtype=torch.uint8) - residuals_packed = residuals_packed.reshape(residuals.size(0), self.dim // 8 * self.nbits) - - return residuals_packed - - def compress_into_codes(self, embs, out_device): - """ - EVENTUALLY: Fusing the kernels or otherwise avoiding materalizing the entire matrix before max(dim=0) - seems like it would help here a lot. - """ - - codes = [] - - bsize = (1 << 29) // self.centroids.size(0) - for batch in embs.split(bsize): - if self.use_gpu: - indices = (self.centroids @ batch.T.cuda().half()).max(dim=0).indices.to(device=out_device) - else: - indices = (self.centroids @ batch.T.cpu().float()).max(dim=0).indices.to(device=out_device) - codes.append(indices) - - return torch.cat(codes) - - def lookup_centroids(self, codes, out_device): - """ - Handles multi-dimensional codes too. - - EVENTUALLY: The .split() below should happen on a flat view. - """ - - centroids = [] - - for batch in codes.split(1 << 20): - if self.use_gpu: - centroids.append(self.centroids[batch.cuda().long()].to(device=out_device)) - else: - centroids.append(self.centroids[batch.long()].to(device=out_device)) - - return torch.cat(centroids) - - #@profile - def decompress(self, compressed_embs: Embeddings): - """ - We batch below even if the target device is CUDA to avoid large temporary buffers causing OOM. - """ - - codes, residuals = compressed_embs.codes, compressed_embs.residuals - - D = [] - for codes_, residuals_ in zip(codes.split(1 << 15), residuals.split(1 << 15)): - if self.use_gpu: - codes_, residuals_ = codes_.cuda(), residuals_.cuda() - centroids_ = ResidualCodec.decompress_residuals( - residuals_, - self.bucket_weights, - self.reversed_bit_map, - self.decompression_lookup_table, - codes_, - self.centroids, - self.dim, - self.nbits, - ).cuda() - else: - # TODO: Remove dead code - centroids_ = self.lookup_centroids(codes_, out_device='cpu') - residuals_ = self.reversed_bit_map[residuals_.long()] - residuals_ = self.decompression_lookup_table[residuals_.long()] - residuals_ = residuals_.reshape(residuals_.shape[0], -1) - residuals_ = self.bucket_weights[residuals_.long()] - centroids_.add_(residuals_) - - if self.use_gpu: - D_ = torch.nn.functional.normalize(centroids_, p=2, dim=-1).half() - else: - D_ = torch.nn.functional.normalize(centroids_.to(torch.float32), p=2, dim=-1) - D.append(D_) - - return torch.cat(D) diff --git a/spaces/diagaiwei/ir_chinese_medqa/utility/utils/dpr.py b/spaces/diagaiwei/ir_chinese_medqa/utility/utils/dpr.py deleted file mode 100644 index f268d85bedb0484e9143bd271fd5b6d7c2283f28..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/utility/utils/dpr.py +++ /dev/null @@ -1,237 +0,0 @@ -""" - Source: DPR Implementation from Facebook Research - https://github.com/facebookresearch/DPR/tree/master/dpr -""" - -import string -import spacy -import regex -import unicodedata - - -class Tokens(object): - """A class to represent a list of tokenized text.""" - TEXT = 0 - TEXT_WS = 1 - SPAN = 2 - POS = 3 - LEMMA = 4 - NER = 5 - - def __init__(self, data, annotators, opts=None): - self.data = data - self.annotators = annotators - self.opts = opts or {} - - def __len__(self): - """The number of tokens.""" - return len(self.data) - - def slice(self, i=None, j=None): - """Return a view of the list of tokens from [i, j).""" - new_tokens = copy.copy(self) - new_tokens.data = self.data[i: j] - return new_tokens - - def untokenize(self): - """Returns the original text (with whitespace reinserted).""" - return ''.join([t[self.TEXT_WS] for t in self.data]).strip() - - def words(self, uncased=False): - """Returns a list of the text of each token - - Args: - uncased: lower cases text - """ - if uncased: - return [t[self.TEXT].lower() for t in self.data] - else: - return [t[self.TEXT] for t in self.data] - - def offsets(self): - """Returns a list of [start, end) character offsets of each token.""" - return [t[self.SPAN] for t in self.data] - - def pos(self): - """Returns a list of part-of-speech tags of each token. - Returns None if this annotation was not included. - """ - if 'pos' not in self.annotators: - return None - return [t[self.POS] for t in self.data] - - def lemmas(self): - """Returns a list of the lemmatized text of each token. - Returns None if this annotation was not included. - """ - if 'lemma' not in self.annotators: - return None - return [t[self.LEMMA] for t in self.data] - - def entities(self): - """Returns a list of named-entity-recognition tags of each token. - Returns None if this annotation was not included. - """ - if 'ner' not in self.annotators: - return None - return [t[self.NER] for t in self.data] - - def ngrams(self, n=1, uncased=False, filter_fn=None, as_strings=True): - """Returns a list of all ngrams from length 1 to n. - - Args: - n: upper limit of ngram length - uncased: lower cases text - filter_fn: user function that takes in an ngram list and returns - True or False to keep or not keep the ngram - as_string: return the ngram as a string vs list - """ - - def _skip(gram): - if not filter_fn: - return False - return filter_fn(gram) - - words = self.words(uncased) - ngrams = [(s, e + 1) - for s in range(len(words)) - for e in range(s, min(s + n, len(words))) - if not _skip(words[s:e + 1])] - - # Concatenate into strings - if as_strings: - ngrams = ['{}'.format(' '.join(words[s:e])) for (s, e) in ngrams] - - return ngrams - - def entity_groups(self): - """Group consecutive entity tokens with the same NER tag.""" - entities = self.entities() - if not entities: - return None - non_ent = self.opts.get('non_ent', 'O') - groups = [] - idx = 0 - while idx < len(entities): - ner_tag = entities[idx] - # Check for entity tag - if ner_tag != non_ent: - # Chomp the sequence - start = idx - while (idx < len(entities) and entities[idx] == ner_tag): - idx += 1 - groups.append((self.slice(start, idx).untokenize(), ner_tag)) - else: - idx += 1 - return groups - - -class Tokenizer(object): - """Base tokenizer class. - Tokenizers implement tokenize, which should return a Tokens class. - """ - - def tokenize(self, text): - raise NotImplementedError - - def shutdown(self): - pass - - def __del__(self): - self.shutdown() - - -class SimpleTokenizer(Tokenizer): - ALPHA_NUM = r'[\p{L}\p{N}\p{M}]+' - NON_WS = r'[^\p{Z}\p{C}]' - - def __init__(self, **kwargs): - """ - Args: - annotators: None or empty set (only tokenizes). - """ - self._regexp = regex.compile( - '(%s)|(%s)' % (self.ALPHA_NUM, self.NON_WS), - flags=regex.IGNORECASE + regex.UNICODE + regex.MULTILINE - ) - if len(kwargs.get('annotators', {})) > 0: - logger.warning('%s only tokenizes! Skipping annotators: %s' % - (type(self).__name__, kwargs.get('annotators'))) - self.annotators = set() - - def tokenize(self, text): - data = [] - matches = [m for m in self._regexp.finditer(text)] - for i in range(len(matches)): - # Get text - token = matches[i].group() - - # Get whitespace - span = matches[i].span() - start_ws = span[0] - if i + 1 < len(matches): - end_ws = matches[i + 1].span()[0] - else: - end_ws = span[1] - - # Format data - data.append(( - token, - text[start_ws: end_ws], - span, - )) - return Tokens(data, self.annotators) - - -def has_answer(tokenized_answers, text): - text = DPR_normalize(text) - - for single_answer in tokenized_answers: - for i in range(0, len(text) - len(single_answer) + 1): - if single_answer == text[i: i + len(single_answer)]: - return True - - return False - - -def locate_answers(tokenized_answers, text): - """ - Returns each occurrence of an answer as (offset, endpos) in terms of *characters*. - """ - tokenized_text = DPR_tokenize(text) - occurrences = [] - - text_words, text_word_positions = tokenized_text.words(uncased=True), tokenized_text.offsets() - answers_words = [ans.words(uncased=True) for ans in tokenized_answers] - - for single_answer in answers_words: - for i in range(0, len(text_words) - len(single_answer) + 1): - if single_answer == text_words[i: i + len(single_answer)]: - (offset, _), (_, endpos) = text_word_positions[i], text_word_positions[i+len(single_answer)-1] - occurrences.append((offset, endpos)) - - return occurrences - - -STokenizer = SimpleTokenizer() - - -def DPR_tokenize(text): - return STokenizer.tokenize(unicodedata.normalize('NFD', text)) - - -def DPR_normalize(text): - return DPR_tokenize(text).words(uncased=True) - - -# Source: https://github.com/shmsw25/qa-hard-em/blob/master/prepro_util.py -def strip_accents(text): - """Strips accents from a piece of text.""" - text = unicodedata.normalize("NFD", text) - output = [] - for char in text: - cat = unicodedata.category(char) - if cat == "Mn": - continue - output.append(char) - return "".join(output) diff --git a/spaces/dilums/sentence-similarity/components/layout/ThemeToggle/index.tsx b/spaces/dilums/sentence-similarity/components/layout/ThemeToggle/index.tsx deleted file mode 100644 index 3623b695308ec872aad07efe1e84415710d074ed..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/components/layout/ThemeToggle/index.tsx +++ /dev/null @@ -1,3 +0,0 @@ -import ThemeToggle from './ThemeToggle'; - -export default ThemeToggle; \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/base_roi_head.py b/spaces/dineshreddy/WALT/mmdet/models/roi_heads/base_roi_head.py deleted file mode 100644 index 3b560a913a52f7e2d6ee5a8f85589fa07e92118b..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/base_roi_head.py +++ /dev/null @@ -1,114 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import torch.nn as nn - -from ..builder import build_shared_head - - -class BaseRoIHead(nn.Module, metaclass=ABCMeta): - """Base class for RoIHeads.""" - - def __init__(self, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - gan_roi_extractor=None, - gan_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None): - super(BaseRoIHead, self).__init__() - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if shared_head is not None: - self.shared_head = build_shared_head(shared_head) - - if bbox_head is not None: - self.init_bbox_head(bbox_roi_extractor, bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - if gan_head is not None: - self.init_gan_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoI head contains a `bbox_head`""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoI head contains a `mask_head`""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @property - def with_shared_head(self): - """bool: whether the RoI head contains a `shared_head`""" - return hasattr(self, 'shared_head') and self.shared_head is not None - - @abstractmethod - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - pass - - @abstractmethod - def init_bbox_head(self): - """Initialize ``bbox_head``""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize ``mask_head``""" - pass - - @abstractmethod - def init_gan_head(self): - """Initialize ``gan_head``""" - pass - - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_meta, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """Forward function during training.""" - - async def async_simple_test(self, x, img_meta, **kwargs): - """Asynchronized test function.""" - raise NotImplementedError - - def simple_test(self, - x, - proposal_list, - img_meta, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ diff --git a/spaces/dineshreddy/WALT/walt/datasets/mask.py b/spaces/dineshreddy/WALT/walt/datasets/mask.py deleted file mode 100644 index cb7b2bcd0f74f48f8eb0cb249334dc9095138976..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/walt/datasets/mask.py +++ /dev/null @@ -1,110 +0,0 @@ -__author__ = 'tsungyi' - -import pycocotools._mask as _mask - -# Interface for manipulating masks stored in RLE format. -# -# RLE is a simple yet efficient format for storing binary masks. RLE -# first divides a vector (or vectorized image) into a series of piecewise -# constant regions and then for each piece simply stores the length of -# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would -# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1] -# (note that the odd counts are always the numbers of zeros). Instead of -# storing the counts directly, additional compression is achieved with a -# variable bitrate representation based on a common scheme called LEB128. -# -# Compression is greatest given large piecewise constant regions. -# Specifically, the size of the RLE is proportional to the number of -# *boundaries* in M (or for an image the number of boundaries in the y -# direction). Assuming fairly simple shapes, the RLE representation is -# O(sqrt(n)) where n is number of pixels in the object. Hence space usage -# is substantially lower, especially for large simple objects (large n). -# -# Many common operations on masks can be computed directly using the RLE -# (without need for decoding). This includes computations such as area, -# union, intersection, etc. All of these operations are linear in the -# size of the RLE, in other words they are O(sqrt(n)) where n is the area -# of the object. Computing these operations on the original mask is O(n). -# Thus, using the RLE can result in substantial computational savings. -# -# The following API functions are defined: -# encode - Encode binary masks using RLE. -# decode - Decode binary masks encoded via RLE. -# merge - Compute union or intersection of encoded masks. -# iou - Compute intersection over union between masks. -# area - Compute area of encoded masks. -# toBbox - Get bounding boxes surrounding encoded masks. -# frPyObjects - Convert polygon, bbox, and uncompressed RLE to encoded -# RLE mask. -# -# Usage: -# Rs = encode( masks ) -# masks = decode( Rs ) -# R = merge( Rs, intersect=false ) -# o = iou( dt, gt, iscrowd ) -# a = area( Rs ) -# bbs = toBbox( Rs ) -# Rs = frPyObjects( [pyObjects], h, w ) -# -# In the API the following formats are used: -# Rs - [dict] Run-length encoding of binary masks -# R - dict Run-length encoding of binary mask -# masks - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) -# in column-major order) -# iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has -# crowd region to ignore -# bbs - [nx4] Bounding box(es) stored as [x y w h] -# poly - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list) -# dt,gt - May be either bounding boxes or encoded masks -# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel). -# -# Finally, a note about the intersection over union (iou) computation. -# The standard iou of a ground truth (gt) and detected (dt) object is -# iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt)) -# For "crowd" regions, we use a modified criteria. If a gt object is -# marked as "iscrowd", we allow a dt to match any subregion of the gt. -# Choosing gt' in the crowd gt that best matches the dt can be done using -# gt'=intersect(dt,gt). Since by definition union(gt',dt)=dt, computing -# iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt) -# For crowd gt regions we use this modified criteria above for the iou. -# -# To compile run "python setup.py build_ext --inplace" -# Please do not contact us for help with compiling. -# -# Microsoft COCO Toolbox. version 2.0 -# Data, paper, and tutorials available at: http://mscoco.org/ -# Code written by Piotr Dollar and Tsung-Yi Lin, 2015. -# Licensed under the Simplified BSD License [see coco/license.txt] - -iou = _mask.iou -merge = _mask.merge -frPyObjects = _mask.frPyObjects - - -def encode(bimask): - if len(bimask.shape) == 3: - return _mask.encode(bimask) - elif len(bimask.shape) == 2: - h, w = bimask.shape - return _mask.encode(bimask.reshape((h, w, 1), order='F'))[0] - - -def decode(rleObjs): - if type(rleObjs) == list: - return _mask.decode(rleObjs) - else: - return _mask.decode([rleObjs])[:, :, 0] - - -def area(rleObjs): - if type(rleObjs) == list: - return _mask.area(rleObjs) - else: - return _mask.area([rleObjs])[0] - - -def toBbox(rleObjs): - if type(rleObjs) == list: - return _mask.toBbox(rleObjs) - else: - return _mask.toBbox([rleObjs])[0] diff --git a/spaces/dirge/voicevox/build_util/process_voicevox_resource.bash b/spaces/dirge/voicevox/build_util/process_voicevox_resource.bash deleted file mode 100644 index 1b0cfe285e8e092296ec728a328385f8b91b3378..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/build_util/process_voicevox_resource.bash +++ /dev/null @@ -1,26 +0,0 @@ -set -eux - -if [ ! -v DOWNLOAD_RESOURCE_PATH ]; then - echo "DOWNLOAD_RESOURCE_PATHが未定義です" - exit 1 -fi - -rm -r speaker_info -cp -r $DOWNLOAD_RESOURCE_PATH/character_info speaker_info - -python $DOWNLOAD_RESOURCE_PATH/scripts/clean_character_info.py \ - --character_info_dir speaker_info/ - -# マニフェスト -jq -s '.[0] * .[1]' engine_manifest.json $DOWNLOAD_RESOURCE_PATH/engine/engine_manifest.json \ - > engine_manifest.json.tmp -mv engine_manifest.json.tmp engine_manifest.json - -python build_util/merge_update_infos.py \ - engine_manifest_assets/update_infos.json \ - $DOWNLOAD_RESOURCE_PATH/engine/engine_manifest_assets/update_infos.json \ - engine_manifest_assets/update_infos.json - -for f in $(ls $DOWNLOAD_RESOURCE_PATH/engine/engine_manifest_assets/* | grep -v update_infos.json); do - cp $f ./engine_manifest_assets/ -done diff --git a/spaces/divyahansg/text-generation-webui-space/modules/deepspeed_parameters.py b/spaces/divyahansg/text-generation-webui-space/modules/deepspeed_parameters.py deleted file mode 100644 index 3dbed437f5b5196d0b1fcbc582085319fb8d40d1..0000000000000000000000000000000000000000 --- a/spaces/divyahansg/text-generation-webui-space/modules/deepspeed_parameters.py +++ /dev/null @@ -1,75 +0,0 @@ -def generate_ds_config(ds_bf16, train_batch_size, nvme_offload_dir): - - ''' - DeepSpeed configration - https://huggingface.co/docs/transformers/main_classes/deepspeed - ''' - - if nvme_offload_dir: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "nvme", - "nvme_path": nvme_offload_dir, - "pin_memory": True, - "buffer_count": 5, - "buffer_size": 1e9, - "max_in_cpu": 1e9 - }, - "overlap_comm": True, - "reduce_bucket_size": "auto", - "contiguous_gradients": True, - "sub_group_size": 1e8, - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "aio": { - "block_size": 262144, - "queue_depth": 32, - "thread_count": 1, - "single_submit": False, - "overlap_events": True - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - else: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "cpu", - "pin_memory": True - }, - "overlap_comm": True, - "contiguous_gradients": True, - "reduce_bucket_size": "auto", - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - - return ds_config diff --git a/spaces/dmeck/RVC-Speakers/vits/monotonic_align/__init__.py b/spaces/dmeck/RVC-Speakers/vits/monotonic_align/__init__.py deleted file mode 100644 index ffffd2426e11721d9b4b334a485552a395ccfdba..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/vits/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from vits.monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/dolceschokolade/chatbot-mini/pages/api/home/home.context.tsx b/spaces/dolceschokolade/chatbot-mini/pages/api/home/home.context.tsx deleted file mode 100644 index be00de03828d0cc84a129522446c6e3de6dbab1f..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/pages/api/home/home.context.tsx +++ /dev/null @@ -1,27 +0,0 @@ -import { Dispatch, createContext } from 'react'; - -import { ActionType } from '@/hooks/useCreateReducer'; - -import { Conversation } from '@/types/chat'; -import { KeyValuePair } from '@/types/data'; -import { FolderType } from '@/types/folder'; - -import { HomeInitialState } from './home.state'; - -export interface HomeContextProps { - state: HomeInitialState; - dispatch: Dispatch>; - handleNewConversation: () => void; - handleCreateFolder: (name: string, type: FolderType) => void; - handleDeleteFolder: (folderId: string) => void; - handleUpdateFolder: (folderId: string, name: string) => void; - handleSelectConversation: (conversation: Conversation) => void; - handleUpdateConversation: ( - conversation: Conversation, - data: KeyValuePair, - ) => void; -} - -const HomeContext = createContext(undefined!); - -export default HomeContext; diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/llava/README.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/llava/README.md deleted file mode 100644 index 287162efef3ab7a047bef9d5cb37c16871703fd4..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/llava/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# LLaVA - -## Description -Adds [LLaVA 13B](https://github.com/haotian-liu/LLaVA) multimodality support to text-generation-webui. - -https://user-images.githubusercontent.com/3718215/233817203-69b57e77-0c55-4fd6-b742-3204bb13b8fc.mp4 - -## LLaVA-7B -7B version currently isn't supported. It will be supported if/when [more generic multimodality support](https://github.com/oobabooga/text-generation-webui/discussions/1687) gets implemented. - -## Usage -To run this extension, download LLaVA weights, for example from [here](https://huggingface.co/wojtab/llava-13b-v0-4bit-128g) (note: it's a 4-bit [GPTQ quantization](https://github.com/oobabooga/text-generation-webui/tree/main/docs/GPTQ-models-(4-bit-mode).md), done on "old CUDA" branch), and then start server.py with `--extensions llava` argument. - -Do note, that each image takes up 258 tokens, so adjust max_new_tokens to be at most 1700 (recommended value is between 200 to 500), so the images don't get truncated. - -To send an image, just upload it to the extension field below chat, and send a prompt as always. The image will be added to the end of your message. If you wish to modify the placement, include a string `` in your prompt. - -Additionally, there is *Embed all images, not only the last one* checkbox. It modifies the image embeddings, by default (if it's unchecked), all but the most recent images have their embeddings empty, so they are not fed to the network. From initial testing, it seems as LLaVA considers the features in all images at the same time, so by default the extension skips previous images. If you want to include them anyway, just tick this checkbox. - -## Extension config -This extension uses following parameters (from settings.json): -|Parameter|Description| -|---------|-----------| -|`llava-clip_bits`|Number of bits to load CLIP feature extractor in (either 32 or 16, default=32)| -|`llava-clip_device`|Torch device to run the extractor on, for example `cpu` or `cuda:0`, by default `cuda:0` if available| -|`llava-clip_repo`|Huggingface repository of CLIP model, `openai/clip-vit-large-patch14` by default. There should be no need to change it| -|`llava-projector_bits`|Number of bits to load CLIP->LLaMA feature projector in (either 32 or 16, default=32)| -|`llava-projector_device`|Torch device to run the CLIP->LLaMA feature projector on, for example `cpu` or `cuda:0`, by default `cuda:0` if available| -|`llava-projector_repo`|Huggingface repository of multimodal projector, `liuhaotian/LLaVA-13b-delta-v0` by default. There should be no need to change it| -|`llava-projector_filename`|The filename of multimodal projector weights, `mm_projector.bin` by default. There should be no need to change it| -|`llava-add_all_images_to_prompt`|Default value of "Embed all images, not only the last one" checkbox| -## Technical description - -### Original LLaVA -The default LLaVA implementation uses modified `transformers` library, however this extension forgoes this requirement. The transformers are modified in LLaVA in such a way, that the entire LLaVA model gets loaded, and the inference now looks as follows: -``` -images --> CLIP --> projector --> input embeddings for images --> | - | --> LLaMA -prompt -------------------------> input embeddings for text ----> | -``` -The images are represented in the prompt by the following token IDs: -- 32000 - `` - placeholder token for embeddings from projector -- 32001 - `` - token marking start of an image -- 32002 - `` - token marking end of an image - -By default, image will be represented as `*256`. The input embeddings for an image are converted with a single linear layer of the projector, then they are placed instead of `` tokens. -The concatenated prompt then gets fed to fine-tuned LLaMA. - -### In this extension - -Using default transformers, they only load the LLaMA part of LLaVA, ignoring the added projector weights, and not loading CLIP. We then reconstruct the `images -> CLIP -> projector` pipeline ourselves, then concatenate the input embeddings, and feed it to LLaMA loaded by transformers. This allows us to use normal flow from webui to load this model, and just hijack the model input with additional features. -Splitting it to 3 separate models, allows us to configure each of them, and to move them to different devices(for example we can run CLIP+projector on CPU and LLaMA on GPU). Also, it enables us to use 4-bit GPTQ quantization for LLaVA, massively cutting down the VRAM requirement (it should be possible to fit on 12GB of VRAM with full context size by moving CLIP and projector to CPU). - -### Usage through API - -You can run the multimodal inference through API, by inputting the images to prompt. Images are embedded like so: `f''`, where `img_str` is base-64 jpeg data. Python example: -```Python -import base64 -import requests - -CONTEXT = "You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.\n### Human: \nHi!\n### Assistant: \nHi there! How can I help you today?\n" - -with open('extreme_ironing.jpg', 'rb') as f: - img_str = base64.b64encode(f.read()).decode('utf-8') - prompt = CONTEXT + f'### Human: \nWhat is unusual about this image: \n\n### Assistant: \n' - print(requests.post('http://127.0.0.1:5000/api/v1/generate', json={'prompt': prompt, 'stopping_strings': ['\n###']}).json()) -``` -script output: -```Python -{'results': [{'text': "The unusual aspect of this image is that a man is standing on top of a yellow minivan while doing his laundry. He has set up a makeshift clothes line using the car's rooftop as an outdoor drying area. This scene is uncommon because people typically do their laundry indoors, in a dedicated space like a laundromat or a room in their home, rather than on top of a moving vehicle. Additionally, hanging clothes on the car could be potentially hazardous or illegal in some jurisdictions due to the risk of damaging the vehicle or causing accidents on the road.\n##"}]} -``` \ No newline at end of file diff --git a/spaces/eatcosmos/hackaprompt/README.md b/spaces/eatcosmos/hackaprompt/README.md deleted file mode 100644 index 567589db4ffcb497e2aef9d060bb8ffd16cef3b6..0000000000000000000000000000000000000000 --- a/spaces/eatcosmos/hackaprompt/README.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: hackaprompt -sdk: gradio -app_file: hackaprompt/gradio_app.py -duplicated_from: jerpint-org/hackaprompt ---- -# Hackaprompt - -Code for hosting and evaluating the hackaprompt competition. - -## Installation - -Clone the repository - - cd && git clone https://github.com/jerpint/hackaprompt/ - -Create a python environment with `python>=3.9`, then: - - cd ~/hackaprompt - pip install -e . - -## Gradio App - -To run the gradio app: - - cd ~/hackaprompt/hackprompt && gradio gradio_app.py - - -## Evaluation - - cd ~/hackaprompt/hackaprompt && python score_submission.py - - -## Deployment on HF Space - -To deploy on HuggingFace space, first, create a space. Then: - - git remote add space https://huggingface.co/spaces/jerpint/hackaprompt - git push --force space main - -## Secrets - -### MongoDB - -To enable logging to MongoDB, you need to add the following env. variables to your environment: - - export HACKAPROMPT_MONGODB_USERNAME=... - export HACKAPROMPT_MONGODB_PASSWORD=... - export HACKAPROMPT_MONGODB_CLUSTER=... - export HACKAPROMPT_MONGODB_DB_NAME=... - - -### Flan endpoint - -The Flan model is hosted on a private space exclusively for this competition. To use it, it needs to have the valid hf token associated to it to authenticate: - - export HUB_TOKEN=hf_... - -### OpenAI - -To run tests and evaluations, a valid openai api key should be set as an env. variable: - - export OPENAI_API_KEY=sk-... diff --git a/spaces/elplaguister/Yuuka_TTS/src/utils.py b/spaces/elplaguister/Yuuka_TTS/src/utils.py deleted file mode 100644 index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000 --- a/spaces/elplaguister/Yuuka_TTS/src/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/main/hooks/useCollections.ts b/spaces/enzostvs/stable-diffusion-tpu/components/main/hooks/useCollections.ts deleted file mode 100644 index 5fc1a114985298d22c46ea935807569ba1d9c6f5..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/main/hooks/useCollections.ts +++ /dev/null @@ -1,89 +0,0 @@ -import { useState } from "react"; -import { useQuery, useQueryClient } from "@tanstack/react-query"; -import { useUpdateEffect } from "react-use"; -import _ from "lodash"; - -import { useUser } from "@/utils/useUser"; - -export const useCollections = (category: string) => { - const [loading, setLoading] = useState(false); - const { user, loading: userLoading, token } = useUser(); - - const client = useQueryClient(); - - const { - data, - isFetching, - refetch, - } = useQuery( - ["collections"], - async () => { - const queryParams = new URLSearchParams(); - if (category === 'my-own') { - queryParams.append('userId', user?.sub); - } - queryParams.append('page', '0'); - - const response = await fetch(`/api/collections?${queryParams.toString()}`, { - headers: { - ...(user?.sub ? { 'Authorization': token } : {}) - }, - method: "GET", - }) - - const data = await response.json() - - if (!response.ok) { - throw new Error(data.message) - } - return { - images: data?.collections, - pagination: data?.pagination, - }; - }, - { - enabled: !userLoading, - refetchOnMount: false, - refetchOnWindowFocus: false, - refetchOnReconnect: false, - } - ); - - const infiniteRefetch = async () => { - setLoading(true); - const queryParams = new URLSearchParams(); - if (category === 'my-own') { - queryParams.append('userId', user?.sub); - } - queryParams.append('page', data?.pagination?.page + 1); - - const response = await fetch(`/api/collections?${queryParams.toString()}`, { - headers: { - ...(user?.sub ? { 'Authorization': token } : {}) - }, - method: "GET", - }) - - const d = await response.json() - if (d.ok) { - const images = _.concat(data?.images, d?.collections); - client.setQueryData(["collections"], { - images, - pagination: d?.pagination, - }); - } - setLoading(false); - }; - - useUpdateEffect(() => { - refetch() - }, [category]); - - return { - images: data?.images, - loading: isFetching, - infiniteLoading: loading, - infiniteRefetch, - pagination: data?.pagination, - } -}; \ No newline at end of file diff --git a/spaces/epexVfeibi/Imagedeblurr/!NEW! Crack Autocad 2016 X64 (64bit) Product Key.md b/spaces/epexVfeibi/Imagedeblurr/!NEW! Crack Autocad 2016 X64 (64bit) Product Key.md deleted file mode 100644 index fdee8cf9d68ea4b57f882ebbda8b6ae649e6ff7c..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/!NEW! Crack Autocad 2016 X64 (64bit) Product Key.md +++ /dev/null @@ -1,37 +0,0 @@ -
            -

            How to Find and Activate Your Autocad 2016 x64 (64bit) Product Key

            -

            If you have purchased or downloaded Autodesk Autocad 2016 x64 (64bit), you will need a product key to activate it. A product key is a unique code that identifies your software license and allows you to use all the features of Autocad 2016. Without a product key, you will only be able to use Autocad 2016 in trial mode, which has limited functionality and expires after 30 days.

            -

            There are different ways to find and activate your Autocad 2016 x64 (64bit) product key, depending on how you obtained your software. Here are some common scenarios and the steps to follow:

            -

            CRACK Autocad 2016 x64 (64bit) Product key


            Download File ✶✶✶ https://jinyurl.com/2uEo1z



            -

            If you bought Autocad 2016 x64 (64bit) from an authorized reseller

            -

            If you bought a physical copy of Autocad 2016 x64 (64bit) from an authorized reseller, such as Microsol Resources[^1^] or Cadline Community[^2^], you should have received a product key with your purchase. The product key is usually printed on a label or card inside the product packaging. You can also find it on your receipt or invoice.

            -

            To activate your product key, follow these steps:

            -

            -
              -
            1. Install Autocad 2016 x64 (64bit) on your computer using the installation media or download link provided by the reseller.
            2. -
            3. Run Autocad 2016 x64 (64bit) and enter your serial number and product key when prompted. The serial number is also printed on the product packaging or receipt. The product key for Autocad 2016 x64 (64bit) is 001H1.
            4. -
            5. Follow the on-screen instructions to complete the activation process.
            6. -
            -

            If you downloaded Autocad 2016 x64 (64bit) from Autodesk website

            -

            If you downloaded Autocad 2016 x64 (64bit) from Autodesk website, either as a free trial or as part of a subscription plan, you will need to sign in with your Autodesk account to activate it. Your Autodesk account is the email address and password that you used to register or purchase your software online.

            -

            To activate your software, follow these steps:

            -
              -
            1. Install Autocad 2016 x64 (64bit) on your computer using the download link provided by Autodesk.
            2. -
            3. Run Autocad 2016 x64 (64bit) and sign in with your Autodesk account when prompted.
            4. -
            5. Select "I have an activation code from Autodesk" and click "Next".
            6. -
            7. Copy the request code that appears on the screen and go to https://register.autodesk.com.
            8. -
            9. Paste the request code into the box and click "Generate". You will receive an activation code that matches your request code.
            10. -
            11. Copy the activation code and paste it into the activation screen of Autocad 2016 x64 (64bit).
            12. -
            13. Click "Next" and follow the on-screen instructions to complete the activation process.
            14. -
            -

            If you downloaded Autocad 2016 x64 (64bit) from a third-party website

            -

            If you downloaded Autocad 2016 x64 (64bit) from a third-party website, such as Wixsite[^3^] or Archive.org[^4^], you may have obtained a cracked version of the software that bypasses the activation process. However, this is not recommended for several reasons:

            -
              -
            • It is illegal to use cracked software and it may violate Autodesk's terms of service and license agreement.
            • -
            • It may expose your computer to malware, viruses, or other security risks that can damage your data or system.
            • -
            • It may not work properly or have missing or corrupted features that can affect your productivity and quality of work.
            • -
            • It may not be compatible with updates, patches, or other Autodesk products and services that can enhance your user experience.
            • -
            -

            Therefore, we strongly advise you to delete any cracked version of Autocad 2016 x64 (

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/erbanku/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/erbanku/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex deleted file mode 100644 index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex +++ /dev/null @@ -1,18 +0,0 @@ -Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. - -Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. -%\marginpar{not sure if the memory constraints are understandable here} -Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. - -%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away} - -Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network. - -%\marginpar{not sure if "cross-positional communication" is understandable without explanation} -%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?} - -In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. -%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.} - -% Just a standard paragraph with citations, rewrite. -%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do. \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/glm_chinese/wordpiece.py b/spaces/eson/tokenizer-arena/vocab/glm_chinese/wordpiece.py deleted file mode 100644 index 47b32f1511ed6a8adfdc88364b33ce6fc69d4a3c..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/glm_chinese/wordpiece.py +++ /dev/null @@ -1,390 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes. Provided as is from https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization.py""" - -from __future__ import absolute_import, division, print_function, unicode_literals - -import collections -import logging -import os -import unicodedata -from io import open - -from .file_utils import cached_path - -logger = logging.getLogger(__name__) - -PRETRAINED_VOCAB_ARCHIVE_MAP = { - 'bert-base-uncased': ".pytorch_pretrained_bert/bert-base-uncased-vocab.txt", - 'bert-large-uncased': ".pytorch_pretrained_bert/bert-large-uncased-vocab.txt", - 'bert-base-cased': ".pytorch_pretrained_bert/bert-base-cased-vocab.txt", - 'bert-large-cased': ".pytorch_pretrained_bert/bert-large-cased-vocab.txt", - 'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt", - 'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt", - 'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt", -} -PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP = { - 'bert-base-uncased': 512, - 'bert-large-uncased': 512, - 'bert-base-cased': 512, - 'bert-large-cased': 512, - 'bert-base-multilingual-uncased': 512, - 'bert-base-multilingual-cased': 512, - 'bert-base-chinese': 512, -} -VOCAB_NAME = 'vocab.txt' - - -def load_vocab(vocab_file): - """Loads a vocabulary file into a dictionary.""" - vocab = collections.OrderedDict() - index = 0 - with open(vocab_file, "r", encoding="utf-8") as reader: - while True: - token = reader.readline() - if not token: - break - token = token.strip() - vocab[token] = index - index += 1 - return vocab - - -def whitespace_tokenize(text): - """Runs basic whitespace cleaning and splitting on a piece of text.""" - text = text.strip() - if not text: - return [] - tokens = text.split() - return tokens - - -class BertTokenizer(object): - """Runs end-to-end tokenization: punctuation splitting + wordpiece""" - - def __init__(self, vocab_file, do_lower_case=True, max_len=None, do_basic_tokenize=True, - never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]")): - """Constructs a BertTokenizer. - - Args: - vocab_file: Path to a one-wordpiece-per-line vocabulary file - do_lower_case: Whether to lower case the input - Only has an effect when do_wordpiece_only=False - do_basic_tokenize: Whether to do basic tokenization before wordpiece. - max_len: An artificial maximum length to truncate tokenized sequences to; - Effective maximum length is always the minimum of this - value (if specified) and the underlying BERT model's - sequence length. - never_split: List of tokens which will never be split during tokenization. - Only has an effect when do_wordpiece_only=False - """ - if not os.path.isfile(vocab_file): - raise ValueError( - "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " - "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) - self.vocab = load_vocab(vocab_file) - self.ids_to_tokens = collections.OrderedDict( - [(ids, tok) for tok, ids in self.vocab.items()]) - self.do_basic_tokenize = do_basic_tokenize - if do_basic_tokenize: - self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case, - never_split=never_split) - self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab) - self.max_len = max_len if max_len is not None else int(1e12) - - def tokenize(self, text): - if self.do_basic_tokenize: - split_tokens = [] - for token in self.basic_tokenizer.tokenize(text): - for sub_token in self.wordpiece_tokenizer.tokenize(token): - split_tokens.append(sub_token) - else: - split_tokens = self.wordpiece_tokenizer.tokenize(text) - return split_tokens - - def convert_tokens_to_ids(self, tokens): - """Converts a sequence of tokens into ids using the vocab.""" - ids = [] - for token in tokens: - ids.append(self.vocab[token]) - if len(ids) > self.max_len: - logger.warning( - "Token indices sequence length is longer than the specified maximum " - " sequence length for this BERT model ({} > {}). Running this" - " sequence through BERT will result in indexing errors".format(len(ids), self.max_len) - ) - return ids - - def convert_ids_to_tokens(self, ids): - """Converts a sequence of ids in wordpiece tokens using the vocab.""" - tokens = [] - for i in ids: - tokens.append(self.ids_to_tokens[i]) - return tokens - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, cache_dir=None, *inputs, **kwargs): - """ - Instantiate a PreTrainedBertModel from a pre-trained model file. - Download and cache the pre-trained model file if needed. - """ - if pretrained_model_name_or_path in PRETRAINED_VOCAB_ARCHIVE_MAP: - vocab_file = PRETRAINED_VOCAB_ARCHIVE_MAP[pretrained_model_name_or_path] - else: - vocab_file = pretrained_model_name_or_path - if os.path.isdir(vocab_file): - vocab_file = os.path.join(vocab_file, VOCAB_NAME) - # redirect to the cache, if necessary - try: - resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir) - except EnvironmentError: - logger.error( - "Model name '{}' was not found in model name list ({}). " - "We assumed '{}' was a path or url but couldn't find any file " - "associated to this path or url.".format( - pretrained_model_name_or_path, - ', '.join(PRETRAINED_VOCAB_ARCHIVE_MAP.keys()), - vocab_file)) - return None - if resolved_vocab_file == vocab_file: - logger.info("loading vocabulary file {}".format(vocab_file)) - else: - logger.info("loading vocabulary file {} from cache at {}".format( - vocab_file, resolved_vocab_file)) - if pretrained_model_name_or_path in PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP: - # if we're using a pretrained model, ensure the tokenizer wont index sequences longer - # than the number of positional embeddings - max_len = PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP[pretrained_model_name_or_path] - kwargs['max_len'] = min(kwargs.get('max_len', int(1e12)), max_len) - # Instantiate tokenizer. - tokenizer = cls(resolved_vocab_file, *inputs, **kwargs) - return tokenizer - - -class BasicTokenizer(object): - """Runs basic tokenization (punctuation splitting, lower casing, etc.).""" - - def __init__(self, - do_lower_case=True, - never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]")): - """Constructs a BasicTokenizer. - - Args: - do_lower_case: Whether to lower case the input. - """ - self.do_lower_case = do_lower_case - self.never_split = never_split - - def tokenize(self, text): - """Tokenizes a piece of text.""" - text = self._clean_text(text) - # This was added on November 1st, 2018 for the multilingual and Chinese - # models. This is also applied to the English models now, but it doesn't - # matter since the English models were not trained on any Chinese data - # and generally don't have any Chinese data in them (there are Chinese - # characters in the vocabulary because Wikipedia does have some Chinese - # words in the English Wikipedia.). - text = self._tokenize_chinese_chars(text) - orig_tokens = whitespace_tokenize(text) - split_tokens = [] - for token in orig_tokens: - if self.do_lower_case and token not in self.never_split: - token = token.lower() - token = self._run_strip_accents(token) - split_tokens.extend(self._run_split_on_punc(token)) - - output_tokens = whitespace_tokenize(" ".join(split_tokens)) - return output_tokens - - def _run_strip_accents(self, text): - """Strips accents from a piece of text.""" - text = unicodedata.normalize("NFD", text) - output = [] - for char in text: - cat = unicodedata.category(char) - if cat == "Mn": - continue - output.append(char) - return "".join(output) - - def _run_split_on_punc(self, text): - """Splits punctuation on a piece of text.""" - if text in self.never_split: - return [text] - chars = list(text) - i = 0 - start_new_word = True - output = [] - while i < len(chars): - char = chars[i] - if _is_punctuation(char): - output.append([char]) - start_new_word = True - else: - if start_new_word: - output.append([]) - start_new_word = False - output[-1].append(char) - i += 1 - - return ["".join(x) for x in output] - - def _tokenize_chinese_chars(self, text): - """Adds whitespace around any CJK character.""" - output = [] - for char in text: - cp = ord(char) - if self._is_chinese_char(cp): - output.append(" ") - output.append(char) - output.append(" ") - else: - output.append(char) - return "".join(output) - - def _is_chinese_char(self, cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - if ((cp >= 0x4E00 and cp <= 0x9FFF) or # - (cp >= 0x3400 and cp <= 0x4DBF) or # - (cp >= 0x20000 and cp <= 0x2A6DF) or # - (cp >= 0x2A700 and cp <= 0x2B73F) or # - (cp >= 0x2B740 and cp <= 0x2B81F) or # - (cp >= 0x2B820 and cp <= 0x2CEAF) or - (cp >= 0xF900 and cp <= 0xFAFF) or # - (cp >= 0x2F800 and cp <= 0x2FA1F)): # - return True - - return False - - def _clean_text(self, text): - """Performs invalid character removal and whitespace cleanup on text.""" - output = [] - for char in text: - cp = ord(char) - if cp == 0 or cp == 0xfffd or _is_control(char): - continue - if _is_whitespace(char): - output.append(" ") - else: - output.append(char) - return "".join(output) - - -class WordpieceTokenizer(object): - """Runs WordPiece tokenization.""" - - def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=100): - self.vocab = vocab - self.unk_token = unk_token - self.max_input_chars_per_word = max_input_chars_per_word - - def tokenize(self, text): - """Tokenizes a piece of text into its word pieces. - - This uses a greedy longest-match-first algorithm to perform tokenization - using the given vocabulary. - - For example: - input = "unaffable" - output = ["un", "##aff", "##able"] - - Args: - text: A single token or whitespace separated tokens. This should have - already been passed through `BasicTokenizer`. - - Returns: - A list of wordpiece tokens. - """ - - output_tokens = [] - for token in whitespace_tokenize(text): - chars = list(token) - if len(chars) > self.max_input_chars_per_word: - output_tokens.append(self.unk_token) - continue - - is_bad = False - start = 0 - sub_tokens = [] - while start < len(chars): - end = len(chars) - cur_substr = None - while start < end: - substr = "".join(chars[start:end]) - if start > 0: - substr = "##" + substr - if substr in self.vocab: - cur_substr = substr - break - end -= 1 - if cur_substr is None: - is_bad = True - break - sub_tokens.append(cur_substr) - start = end - - if is_bad: - output_tokens.append(self.unk_token) - else: - output_tokens.extend(sub_tokens) - return output_tokens - - -def _is_whitespace(char): - """Checks whether `chars` is a whitespace character.""" - # \t, \n, and \r are technically contorl characters but we treat them - # as whitespace since they are generally considered as such. - if char == " " or char == "\t" or char == "\n" or char == "\r": - return True - cat = unicodedata.category(char) - if cat == "Zs": - return True - return False - - -def _is_control(char): - """Checks whether `chars` is a control character.""" - # These are technically control characters but we count them as whitespace - # characters. - if char == "\t" or char == "\n" or char == "\r": - return False - cat = unicodedata.category(char) - if cat.startswith("C"): - return True - return False - - -def _is_punctuation(char): - """Checks whether `chars` is a punctuation character.""" - cp = ord(char) - # We treat all non-letter/number ASCII as punctuation. - # Characters such as "^", "$", and "`" are not in the Unicode - # Punctuation class but we treat them as punctuation anyways, for - # consistency. - if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or - (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)): - return True - cat = unicodedata.category(char) - if cat.startswith("P"): - return True - return False \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Assassins Creed Brotherhood Pc Skidrow Crack 238.md b/spaces/falterWliame/Face_Mask_Detection/Assassins Creed Brotherhood Pc Skidrow Crack 238.md deleted file mode 100644 index b0e44c13698c013963f82d4695fba358935a2eb2..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Assassins Creed Brotherhood Pc Skidrow Crack 238.md +++ /dev/null @@ -1,18 +0,0 @@ -

            Assassins Creed Brotherhood Pc Skidrow Crack 238


            DOWNLOAD ⚹⚹⚹ https://urlca.com/2uDdrQ



            - -,086 1,837.05 162,248 1,845.88 1.98 - -KID ROW, INC. - -(NEW) All page is;ine own goali../:_.. l_i K.j -r, J..i(r;(_.,_. I I I 4,- - ' - -M D. '- f t : : < -: A -1... T :'1 J, - -ISD'Uf.. ;................ - -...'J : : :.................. - -1'illllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll 4fefd39f24
            -
            -
            -

            diff --git a/spaces/fatiXbelha/sd/APK Creator Build Your Own Android App in Minutes Without Coding.md b/spaces/fatiXbelha/sd/APK Creator Build Your Own Android App in Minutes Without Coding.md deleted file mode 100644 index cc99a383a9b9bb313de9d043c94b84222f307960..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/APK Creator Build Your Own Android App in Minutes Without Coding.md +++ /dev/null @@ -1,138 +0,0 @@ -
            -

            APK Creator: How to Make Your Own Android App

            -

            If you have ever used an Android device, you have probably encountered an apk file at some point. An apk file, which stands for Android Package Kit, is the file format used by the Android operating system to distribute and install apps on mobile devices. An apk file contains all the code, resources, assets, certificates, and manifest file of an app. It is essentially a compressed archive that can be opened by any zip decompression tool.

            -

            apk creator


            DOWNLOAD - https://urllie.com/2uNDmp



            -

            Creating your own apk file can be useful for various reasons, such as developing your own app, installing apps that are not available on Google Play Store, or customizing existing apps to suit your preferences. In this article, we will show you how to create an apk file from different sources, how to edit or modify an apk file, and how to install an apk file on your Android device.

            -

            How to create an apk file from source code

            -

            If you want to create an apk file from scratch, you will need some programming skills and tools. The most common way to develop an Android app is to use Android Studio, a cross-platform IDE that comes with the Android SDK and supports Jetpack Compose, a modern UI toolkit. You can write your app code in either Java or Kotlin, two popular programming languages for Android development.

            -

            To create an apk file from source code using Android Studio, you need to follow these steps:

            -
              -
            1. Create a new project in Android Studio and choose a template for your app.
            2. -
            3. Add the features and functionalities you want for your app using the code editor and the design editor.
            4. -
            5. Test your app using the emulator or a real device connected to your computer.
            6. -
            7. Build your app by selecting Build > Build Bundle(s) / APK(s) > Build APK(s) from the menu.
            8. -
            9. Locate your generated apk file in the project folder under app > build > outputs > apk > debug.
            10. -
            -

            You can also use other tools and languages to create an apk file from source code, such as Visual Studio, C#, C++, Xamarin, or Web Views. However, Android Studio is the most popular and recommended option for beginners and professionals alike.

            -

            apk maker online
            -apk builder free
            -apk generator no coding
            -apk creator for android
            -apk maker app
            -apk builder software
            -apk generator from website
            -apk creator pro
            -apk maker for pc
            -apk builder tool
            -apk generator online free
            -apk creator download
            -apk maker software for windows
            -apk builder app for android
            -apk generator app
            -apk creator without coding
            -apk maker online free
            -apk builder for mac
            -apk generator software
            -apk creator tutorial
            -apk maker no ads
            -apk builder with firebase
            -apk generator from youtube
            -apk creator mod
            -apk maker with database
            -apk builder without ads
            -apk generator for games
            -apk creator premium
            -apk maker with admob
            -apk builder with source code
            -apk generator from pdf
            -apk creator easy to use
            -apk maker with firebase database
            -apk builder with php backend
            -apk generator from html5
            -apk creator full version
            -apk maker with push notification
            -apk builder with kotlin
            -apk generator from wordpress
            -apk creator cracked
            -apk maker with google sheets
            -apk builder with flutter
            -apk generator from instagram
            -apk creator review
            -apk maker with webview
            -apk builder with react native
            -apk generator from tiktok
            -apk creator alternative
            -apk maker with qr code scanner
            -apk builder with java

            -

            How to create an apk file from a website

            -

            If you want to create an apk file from a website, you don't need any coding skills or tools. You can use online services that let you convert any website into an app in minutes. These services are called APK maker tools and they offer various templates and customization options for your app. Some of the most popular APK maker tools are Andromo and AppsGeyser.

            -

            To create an apk file from a website using Andromo, you need to follow these steps:

            -
              -
            1. Go to Andromo's website and sign up for a free account.
            2. -
            3. Click on Create App button and choose Web Site as your app type.
            4. -
            5. Enter the URL of the website you want to convert into an app and click Next.
            6. -
            7. Choose a name and icon for your app and click Next.
            8. -
            9. Select the features and settings you want for your app and click Next.
            10. -
            11. Click on Build My App button and wait for your app to be generated.
            12. -
            13. Download your apk file or scan the QR code to install it on your device.
            14. -
            -

            To create an apk file from a website using AppsGeyser, To edit or modify an apk file using APK Editor Pro, you need to follow these steps:

            -
              -
            1. Download and install APK Editor Pro from Google Play Store or a trusted third-party source on your device.
            2. -
            3. Open the app and grant it the required permissions.
            4. -
            5. Select the apk file you want to edit from your device storage or SD card.
            6. -
            7. Choose the option you want to edit from the menu, such as Common Edit, Full Edit, Simple Edit, or XML File Edit.
            8. -
            9. Make the changes you want and save them.
            10. -
            11. Install the modified apk file on your device or share it with others.
            12. -
            -

            Both APK Explorer & Editor and APK Editor Pro are powerful and easy-to-use tools that can help you customize any apk file. However, you should be careful when editing an apk file, as you may damage the app functionality or security. You should also respect the app developer's rights and not use these tools for illegal or unethical purposes.

            -

            How to install an apk file on an Android device

            -

            If you want to install an apk file on an Android device, you will need some tools that can install the apk file on your device. These tools are called APK installer tools and they allow you to install any apk file from your device storage or SD card. Some of the most popular APK installer tools are APK Installer by Uptodown and APK Installer.

            -

            To install an apk file on an Android device using APK Installer by Uptodown, you need to follow these steps:

            -
              -
            1. Download and install APK Installer by Uptodown from Google Play Store on your device.
            2. -
            3. Open the app and grant it the required permissions.
            4. -
            5. Browse and select the apk file you want to install from your device storage or SD card.
            6. -
            7. Click on Install button and wait for the installation process to complete.
            8. -
            9. Open the installed app and enjoy it.
            10. -
            -

            To install an apk file on an Android device using APK Installer, you need to follow these steps:

            -
              -
            1. Download and install APK Installer from Google Play Store on your device.
            2. -
            3. Open the app and grant it the required permissions.
            4. -
            5. Browse and select the apk file you want to install from your device storage or SD card.
            6. -
            7. Click on Install button and wait for the installation process to complete.
            8. -
            9. Open the installed app and enjoy it.
            10. -
            -

            Both APK Installer by Uptodown and APK Installer are simple and fast tools that can help you install any apk file on your device. However, you should be careful when installing an apk file, as you may expose your device to malware or viruses. You should also respect the app developer's rights and not install pirated or cracked apps.

            -

            Conclusion

            -

            In this article, we have shown you how to create an apk file from different sources, how to edit or modify an apk file, and how to install an apk file on your Android device. We hope you have learned something new and useful about apk files and apk creator tools. Creating and installing your own apk files can be fun and rewarding, as long as you do it safely and legally. If you have any questions or feedback, please feel free to leave a comment below.

            -

            FAQs

            -

            What is the difference between an apk file and an app?

            -

            An apk file is the file format used by the Android operating system to distribute and install apps on mobile devices. An app is the application that runs on your device after you install it from an apk file or another source.

            -

            How can I create an apk file without coding?

            -

            You can create an apk file without coding by using online services that let you convert any website into an app in minutes. These services are called APK maker tools and some of the most popular ones are Andromo and AppsGeyser.

            -

            How can I edit or modify an apk file without coding?

            -

            You can edit or modify an apk file without coding by using tools that can open and edit the contents of an apk file. These tools are called APK editor tools and some of the most popular ones are APK Explorer & Editor and APK Editor Pro.

            -

            How can I install an apk file on my Android device?

            -

            You can install an apk file on your Android device by using tools that can install the apk file on your device. These tools are called APK installer tools and some of the most popular ones are APK Installer by Uptodown and APK Installer.

            How can I share an apk file with others? -

            You can share an apk file with others by using tools that can generate a QR code or a link for your apk file. These tools are called APK sharer tools and some of the most popular ones are APK Share and APK Share & Backup.

            -

            To share an apk file with others using APK Share, you need to follow these steps:

            -
              -
            1. Download and install APK Share from Google Play Store on your device.
            2. -
            3. Open the app and grant it the required permissions.
            4. -
            5. Select the app you want to share from the list of installed apps on your device.
            6. -
            7. Choose the option you want to share from the menu, such as QR Code, Link, or Bluetooth.
            8. -
            9. Scan the QR code, copy the link, or pair the Bluetooth devices to share your apk file with others.
            10. -
            -

            To share an apk file with others using APK Share & Backup, you need to follow these steps:

            -
              -
            1. Download and install APK Share & Backup from Google Play Store on your device.
            2. -
            3. Open the app and grant it the required permissions.
            4. -
            5. Select the app you want to share from the list of installed apps on your device.
            6. -
            7. Choose the option you want to share from the menu, such as QR Code, Link, or Email.
            8. -
            9. Scan the QR code, copy the link, or enter the email address to share your apk file with others.
            10. -
            -

            Both APK Share and APK Share & Backup are convenient and secure tools that can help you share any apk file with others. However, you should be careful when sharing an apk file, as you may expose others to malware or viruses. You should also respect the app developer's rights and not share pirated or cracked apps.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Tekken 3 ISO - The Game That Revolutionized the Fighting Genre.md b/spaces/fatiXbelha/sd/Download Tekken 3 ISO - The Game That Revolutionized the Fighting Genre.md deleted file mode 100644 index f1ee86dc179b99fb96170e2d30ed8c2ae7279106..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Tekken 3 ISO - The Game That Revolutionized the Fighting Genre.md +++ /dev/null @@ -1,130 +0,0 @@ - -

            Download Tekken 3 ISO: How to Play the Classic Fighting Game on Your PC

            -

            Tekken 3 is one of the most popular and influential fighting games of all time. It was released in 1997 for the PlayStation console and later ported to other platforms. It has sold over 8 million copies worldwide and received critical acclaim for its graphics, gameplay, sound, and characters. If you are a fan of Tekken 3 or want to experience this classic game for yourself, you might be wondering how to download Tekken 3 ISO and play it on your PC. In this article, we will explain what Tekken 3 is, why you should download it, and how to do it step by step.

            -

            What is Tekken 3?

            -

            Tekken 3 is the third installment in the Tekken series, a fighting game franchise developed by Namco. It is set 15 years after the events of Tekken 2 and features a new generation of fighters, as well as some returning characters from previous games. It also introduces new gameplay elements, such as sidestepping, jumping, and running, that make the combat more dynamic and fluid.

            -

            download tekken 3 iso


            Download ———>>> https://urllie.com/2uNveW



            -

            The story of Tekken 3

            -

            The story of Tekken 3 revolves around a mysterious entity called Ogre, who is awakened by an ancient artifact and starts to hunt down and absorb the powers of martial artists around the world. Among them is Jun Kazama, the mother of Jin Kazama, who is trained by his grandfather Heihachi Mishima to enter the King of Iron Fist Tournament 3 and avenge his mother's disappearance. However, Jin soon learns that Heihachi has ulterior motives and that he is not the only one who seeks to confront Ogre.

            -

            The gameplay of Tekken 3

            -

            The gameplay of Tekken 3 is similar to previous games in the series, but with some improvements and additions. The game features a roster of 23 playable characters, each with their own unique fighting style, moves, and combos. The game also has several modes, such as Arcade, Versus, Team Battle, Time Attack, Survival, Practice, and Tekken Force. The latter is a side-scrolling beat 'em up mode where you fight against waves of enemies in different stages. The game also has a secret mode called Tekken Ball, where you play volleyball with a bomb-like ball using your attacks.

            -

            The characters of Tekken 3

            -

            The characters of Tekken 3 are divided into two categories: new and returning. The new characters are:

            -
              -
            • Jin Kazama: The protagonist of the game and the son of Jun Kazama and Kazuya Mishima. He has a mix of his parents' fighting styles, as well as some moves from his mentor Lei Wulong.
            • -
            • Hwoarang: A Korean taekwondo practitioner and Jin's rival. He is the student of Baek Doo San, who was defeated by Ogre.
            • -
            • Eddy Gordo: A Brazilian capoeira fighter who seeks revenge against the Mishima Zaibatsu for killing his parents.
            • -
            • Ling Xiaoyu: A Chinese martial artist and a student at Mishima High School. She enters the tournament to win a theme park from Heihachi.
            • -
            • Bryan Fury:

              A cyborg police officer who was revived by Dr. Abel, a scientist working for the Mishima Zaibatsu. He has enhanced strength and durability, but also suffers from constant pain.

            • -
            • Forest Law: A Chinese-American martial artist and the son of Marshall Law, who was injured by Ogre. He fights with the same style as his father, but with more acrobatic moves.
            • -
            • King II: A Mexican wrestler and the successor of the original King, who was killed by Ogre. He wears a jaguar mask and fights with a mix of wrestling and lucha libre techniques.
            • -
            • Kuma II: A brown bear and the son of the original Kuma, who was Heihachi's pet and bodyguard. He has inherited his father's intelligence and loyalty, as well as his fighting skills.
            • -
            • Julia Chang: An American-Chinese girl who was adopted by Michelle Chang, who was kidnapped by Ogre. She fights with the same style as her mother, but with more speed and agility.
            • -
            • Mokujin: A wooden training dummy that was brought to life by Ogre's power. It mimics the fighting style of any character it faces.
            • -
            • Ogre: The final boss of the game and the main antagonist. It is an ancient god-like being that feeds on the souls of strong fighters. It has two forms: a humanoid form and a monstrous form called True Ogre.
            • -
            -

            The returning characters are:

            -
              -
            • Heihachi Mishima: The head of the Mishima Zaibatsu and the organizer of the tournament. He is Jin's grandfather and Kazuya's father. He seeks to capture Ogre and use its power for his own gain.
            • -
            • Nina Williams: An Irish assassin who was hired by Heihachi to kill Jin. She is also the sister of Anna Williams, who is her rival.
            • -
            • Anna Williams: An Irish assassin who works for Dr. Abel. She is also the sister of Nina Williams, who is her rival.
            • -
            • Yoshimitsu: A Japanese ninja and the leader of the Manji Clan, a group of Robin Hood-like thieves. He enters the tournament to steal from the Mishima Zaibatsu and help the poor.
            • -
            • Gon: A secret character that can be unlocked by playing Tekken Ball mode or by using a cheat code. It is a small orange dinosaur that can breathe fire and fart.
            • -
            • Dr. Bosconovitch: A secret character that can be unlocked by completing Tekken Force mode four times. He is a Russian scientist and a friend of Yoshimitsu. He suffers from a disease that makes him fall down after every move.
            • -
            -

            Why download Tekken 3 ISO?

            -

            If you want to play Tekken 3 on your PC, you have two options: either buy the original PlayStation disc and use a disc drive, or download Tekken 3 ISO and use an emulator. The latter option has some advantages and disadvantages that you should consider before downloading it.

            -

            The advantages of downloading Tekken 3 ISO

            -

            Some of the benefits of downloading Tekken 3 ISO are:

            -
              -
            • You can save money and time by not having to buy or find the physical disc.
            • -
            • You can play the game in high resolution and with improved graphics, sound, and performance by using an emulator.
            • -
            • You can customize the game settings and controls to suit your preferences and hardware specifications.
            • -
            • You can use cheats, mods, hacks, and save states to enhance your gaming experience.
            • -
            -

            The disadvantages of downloading Tekken 3 ISO

            -

            Some of the drawbacks of downloading Tekken 3 ISO are:

            -
              -
            • You might encounter compatibility issues or bugs depending on your emulator and PC configuration.
            • -
            • You might face legal risks or ethical dilemmas by downloading pirated or unauthorized copies of the game.
            • -
            • You might miss out on some features or content that are exclusive to the original disc version or other platforms.
            • -
            • You might lose the nostalgic feeling or authenticity of playing the game on a PlayStation console.
            • -

            How to download Tekken 3 ISO?

            -

            If you have decided to download Tekken 3 ISO and play it on your PC, you will need to follow some steps to make it work. Here is a guide on how to do it:

            -

            How to download tekken 3 iso for pc
            -Tekken 3 iso download for android
            -Download tekken 3 iso file for epsxe
            -Tekken 3 iso free download full version
            -Best site to download tekken 3 iso
            -Download tekken 3 iso highly compressed
            -Tekken 3 iso download for windows 10
            -Download tekken 3 iso with cheats
            -Tekken 3 iso download for ps1 emulator
            -Download tekken 3 iso for psp
            -Tekken 3 iso download for pc windows 7
            -Download tekken 3 iso from google drive
            -Tekken 3 iso download for mac
            -Download tekken 3 iso with all characters unlocked
            -Tekken 3 iso download for pc offline
            -Download tekken 3 iso in hindi
            -Tekken 3 iso download for pc softonic
            -Download tekken 3 iso with bios
            -Tekken 3 iso download for pc zip file
            -Download tekken 3 iso without emulator
            -Tekken 3 iso download for pc windows 8
            -Download tekken 3 iso with sound
            -Tekken 3 iso download for linux
            -Download tekken 3 iso with mods
            -Tekken 3 iso download for pc setup
            -Download tekken 3 iso no password
            -Tekken 3 iso download for pc rar file
            -Download tekken 3 iso with music
            -Tekken 3 iso download for chromebook
            -Download tekken 3 iso with online multiplayer
            -Tekken 3 iso download for pc windows xp
            -Download tekken 3 iso with hd graphics
            -Tekken 3 iso download for raspberry pi
            -Download tekken 3 iso with english subtitles
            -Tekken 3 iso download for pc gamepad support
            -Download tekken 3 iso no survey
            -Tekken 3 iso download for pc crack file
            -Download tekken 3 iso with voice over
            -Tekken 3 iso download for firestick
            -Download tekken 3 iso with custom skins

            -

            The requirements for downloading Tekken 3 ISO

            -

            Before you download Tekken 3 ISO, you will need to have some things ready on your PC. These are:

            -
              -
            • A PlayStation emulator: This is a software that mimics the functions of a PlayStation console and allows you to run PlayStation games on your PC. There are many emulators available online, such as ePSXe, PCSX, and PSXfin. You can choose the one that suits your needs and preferences.
            • -
            • The Tekken 3 ISO file: This is a digital copy of the game disc that contains all the data and information of the game. You can download it from various websites that offer ISO files, such as CoolROM, Emuparadise, and Rom Hustler. Make sure you download a safe and reliable file that does not contain viruses or malware.
            • -
            • A file extractor: This is a software that allows you to extract or unzip compressed files, such as ZIP or RAR files. You will need this to access the contents of the Tekken 3 ISO file after downloading it. You can use any file extractor that you have on your PC, such as WinRAR, 7-Zip, or PeaZip.
            • -
            • A controller: This is optional, but recommended if you want to have a better gaming experience. You can use any controller that is compatible with your PC, such as a keyboard, a mouse, a joystick, or a gamepad. You can also use an adapter to connect a PlayStation controller to your PC.
            • -
            -

            The steps for downloading Tekken 3 ISO

            -

            Once you have all the requirements ready, you can proceed to download Tekken 3 ISO and play it on your PC. Here are the steps to follow:

            -

            Step 1: Download a PlayStation emulator

            -

            The first step is to download a PlayStation emulator from the internet. You can search for the emulator that you want to use and download it from its official website or a trusted source. For example, if you want to use ePSXe, you can go to https://www.epsxe.com/ and click on the download link for your operating system. After downloading the emulator, you will need to install it on your PC by following the instructions on the screen.

            -

            Step 2: Download the Tekken 3 ISO file

            -

            The next step is to download the Tekken 3 ISO file from the internet. You can search for the file on any website that offers ISO files and download it from there. For example, if you want to use CoolROM, you can go to https://coolrom.com/roms/psx/39780/Tekken_3.php and click on the download link for Tekken 3. After downloading the file, you will need to save it in a folder on your PC where you can easily find it later.

            -

            Step 3: Extract the files and run the emulator

            -

            The third step is to extract the files from the Tekken 3 ISO file and run the emulator on your PC. You will need to use a file extractor to unzip or decompress the file and access its contents. For example, if you use WinRAR, you can right-click on the file and select "Extract Here" or "Extract to Tekken 3". After extracting the files, you will need to run the emulator by double-clicking on its icon or shortcut. The emulator will open and show you its main menu or interface.

            Step 4: Configure the settings and controls

            -

            The fourth step is to configure the settings and controls of the emulator and the game to suit your preferences and hardware specifications. You will need to access the emulator's menu or options and adjust the settings for video, audio, input, plugins, bios, and other features. You will also need to set up the controls for the game by mapping the buttons or keys to your controller or keyboard. You can use the default settings or customize them according to your liking.

            -

            Step 5: Load the Tekken 3 ISO and enjoy the game

            -

            The final step is to load the Tekken 3 ISO and enjoy the game on your PC. You will need to go to the emulator's menu or options and select the option to load or run an ISO file. You will then need to browse your PC and locate the folder where you saved the Tekken 3 ISO file. You will then need to select the file and click on open or OK. The emulator will load the file and start the game. You can then play the game as you would on a PlayStation console, using your controller or keyboard.

            -

            Conclusion

            -

            Tekken 3 is a classic fighting game that you can download and play on your PC using an emulator and an ISO file. In this article, we have explained what Tekken 3 is, why you should download it, and how to do it step by step. We hope that this guide has helped you to download Tekken 3 ISO and enjoy this amazing game on your PC.

            -

            FAQs

            -

            Here are some frequently asked questions about downloading Tekken 3 ISO:

            -
              -
            • Q: Is downloading Tekken 3 ISO legal?
            • -
            • A: Downloading Tekken 3 ISO may not be legal in some countries or regions, depending on the copyright laws and regulations. It is your responsibility to check the legality of downloading Tekken 3 ISO before doing so.
            • -
            • Q: Is downloading Tekken 3 ISO safe?
            • -
            • A: Downloading Tekken 3 ISO may not be safe if you download it from untrusted or malicious sources. It is possible that some files may contain viruses or malware that can harm your PC or compromise your privacy. It is advisable to download Tekken 3 ISO from reputable and verified sources.
            • -
            • Q: Is downloading Tekken 3 ISO worth it?
            • -
            • A: Downloading Tekken 3 ISO may be worth it if you are a fan of Tekken 3 or want to experience this classic game for yourself. It may also be worth it if you want to play the game in high resolution and with improved graphics, sound, and performance. However, it may not be worth it if you prefer to play the game on a PlayStation console or another platform, or if you are concerned about the legal or ethical implications of downloading Tekken 3 ISO.
            • -
            • Q: How long does it take to download Tekken 3 ISO?
            • -
            • A: The time it takes to download Tekken 3 ISO depends on several factors, such as your internet speed, your PC configuration, and the size of the file. The average size of Tekken 3 ISO is about 500 MB, which means that it may take a few minutes to a few hours to download it.
            • -
            • Q: How much space does Tekken 3 ISO take on my PC?
            • -
            • A: The space that Tekken 3 ISO takes on your PC depends on how you extract or unzip the file. The average size of Tekken 3 ISO is about 500 MB, which means that it may take about 1 GB of space on your PC after extracting it.
            • -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy the Ultimate Soccer Experience with FIFA 18 APK OBB for Android.md b/spaces/fatiXbelha/sd/Enjoy the Ultimate Soccer Experience with FIFA 18 APK OBB for Android.md deleted file mode 100644 index f90415a7bdebea5b9d72586ddfe68d3a02e9b380..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy the Ultimate Soccer Experience with FIFA 18 APK OBB for Android.md +++ /dev/null @@ -1,123 +0,0 @@ - -

            FIFA 18 APK + OBB Download for Android

            -

            If you are a fan of soccer games, you must have heard of FIFA 18, one of the most popular and realistic soccer games ever made. FIFA 18 is a game developed by EA Sports and released in 2017 for various platforms, including Android. In this article, we will show you how to download and install FIFA 18 APK + OBB on your Android device, and why you should play this amazing game.

            -

            fifa 18 apk + obb download for android


            Download ❤❤❤ https://urllie.com/2uNEpe



            -

            What is FIFA 18?

            -

            FIFA 18 is the 25th installment in the FIFA series, which is based on the real-life soccer leagues and tournaments around the world. FIFA 18 features over 700 teams and 30 leagues, including the English Premier League, the Spanish La Liga, the German Bundesliga, and more. You can play as your favorite teams and players, such as Cristiano Ronaldo, Lionel Messi, Neymar, and more.

            -

            Features of FIFA 18

            -

            FIFA 18 has many features that make it stand out from other soccer games. Here are some of them:

            -

            Realistic graphics and gameplay

            -

            FIFA 18 uses the Frostbite engine, which delivers stunning graphics and animations. You can see the details of the players' faces, expressions, movements, and reactions. The gameplay is also smooth and responsive, with realistic physics and ball control. You can feel the impact of every shot, pass, tackle, and goal.

            -

            Immersive career mode and ultimate team

            -

            FIFA 18 has two modes that let you create your own soccer story. In career mode, you can choose to be a player or a manager, and lead your team to glory. You can also create your own custom player and start from the bottom to become a legend. In ultimate team mode, you can build your dream team from scratch, using players from different leagues and nations. You can also compete with other players online and earn rewards.

            -

            fifa 18 v10 apk obb latest version 2023
            -fifa 18 v10 apk obb android tv tablet pc
            -fifa 18 v10 apk obb alex patch com.fifa2018v10best.fifa18
            -fifa 18 v10 apk obb free download install xapk
            -fifa 18 v10 apk obb game for android 4.4 4.3 4.2 4.1
            -fifa 18 v10 apk obb game for android 5 6 7 8 9 10 11 12
            -fifa 18 v10 apk obb update free mobile app game
            -fifa 18 v10 apk obb download apkcombo installer app
            -fifa 18 v10 apk obb download latest version apkcombo
            -fifa 18 v10 apk obb download android apkcombo games sports
            -fifa 18 v10 apk obb download trending searches alpha ace carx street fortnite roblox mini militia madfut 23 dragon ball legends clash royale sigma clash of clans bus simulator 2023 fgo brawl stars pokemon unite clash mini project playtime nikke the goddess of victory tft
            -fifa 18 v10 apk obb download report an issue advertisement
            -fifa 18 v10 apk obb download tap install select fifa 18 v10.xapk tap ok follow the steps on screen
            -fifa 18 v10 apk obb download uploader game for android
            -fifa 18 v10 apk obb download version:1.5 alex patch com.fifa2018v10best.fifa18
            -fifa18 apk+ obb download latest version 2023 mobile app game
            -fifa18 apk+ obb download update free mobile app game
            -fifa18 apk+ obb download android apkcombo games sports
            -fifa18 apk+ obb download trending searches alpha ace carx street fortnite roblox mini militia madfut 23 dragon ball legends clash royale sigma clash of clans bus simulator 2023 fgo brawl stars pokemon unite clash mini project playtime nikke the goddess of victory tft
            -fifa18 apk+ obb download report an issue advertisement
            -fifa18 apk+ obb download tap install select fifa18.xapk tap ok follow the steps on screen
            -fifa18 apk+ obb download uploader game for android
            -fifa18 apk+ obb download version:1.5 alex patch com.fifa2018v10best.fifa18
            -fifa18 apk+ obb download free download install xapk
            -fifa18 apk+ obb download game for android 4.4 4.3 4.2 4.1
            -fifa18 apk+ obb download game for android 5 6 7 8 9 10 11 12
            -fifa18 apk+ obb download android tv tablet pc
            -fifa18 apk+ obb download latest version apkcombo installer app
            -fifa18 apk+ obb download latest version com.fifa2018v10best.fifa18
            -how to install fifa 18 v10 apk obb on android device step by step guide
            -how to install fifa18 apk+ obb on android device step by step guide
            -how to play fifa 18 v10 apk obb offline mode without internet connection tips and tricks
            -how to play fifa18 apk+ obb offline mode without internet connection tips and tricks
            -how to update fifa 18 v10 apk obb to the latest version without losing data backup and restore tutorial
            -how to update fifa18 apk+ obb to the latest version without losing data backup and restore tutorial
            -how to fix common errors and issues with fifa 18 v10 apk obb troubleshooting and solutions guide
            -how to fix common errors and issues with fifa18 apk+ obb troubleshooting and solutions guide
            -how to unlock all features and modes in fifa 18 v10 apk obb hack mod cheat unlimited coins points players kits stadiums etc.
            -how to unlock all features and modes in fifa18 apk+ obb hack mod cheat unlimited coins points players kits stadiums etc.
            -how to transfer your progress and data from fifa17 or other versions to fifa 18 v10 apk obb migration and compatibility guide

            -

            Online multiplayer and tournaments

            -

            FIFA 18 also allows you to play with or against other players online. You can join online matches and leagues, or create your own tournaments with your friends. You can also participate in special events and challenges that are updated regularly. You can also chat with other players and share your achievements.

            -

            How to download and install FIFA 18 APK + OBB on Android

            -

            If you want to play FIFA 18 on your Android device, you will need to download and install two files: the APK file and the OBB file. The APK file is the application file that lets you run the game on your device. The OBB file is the data file that contains the game's graphics, sounds, and other resources. Here are the steps to download and install FIFA 18 APK + OBB on Android:

            -

            Requirements and compatibility

            -

            Before you download and install FIFA 18 APK + OBB on Android, you need to make sure that your device meets the following requirements:

            -
              -
            • Your device must have at least 2 GB of RAM and 5 GB of free storage space.
            • -
            • Your device must run on Android version 4.4 or higher.
            • -
            • Your device must have a stable internet connection.
            • -
            -

            Steps to download and install FIFA 18 APK + OBB

            -
              -
            1. Go to this link to download FIFA 18 APK + OBB on your device. This is a trusted and safe source that provides the latest version of the game.
            2. -
            3. After the download is complete, go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install the APK file that you downloaded.
            4. -
            5. Locate the FIFA 18 APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
            6. -
            7. Do not open the game yet. You still need to copy the OBB file to the right folder on your device.
            8. -
            9. Locate the FIFA 18 OBB file on your device and extract it using a file manager app. You will get a folder named com.ea.game.fifa14_row.
            10. -
            11. Copy this folder and paste it in the Android/OBB folder on your device's internal storage. If you don't have an OBB folder, you can create one.
            12. -
            13. Now you can open the game and enjoy FIFA 18 on your Android device.
            14. -
            -

            Troubleshooting tips

            -

            If you encounter any problems while downloading or installing FIFA 18 APK + OBB on Android, here are some tips that might help:

            -
              -
            • Make sure that you have enough storage space and RAM on your device.
            • -
            • Make sure that you have a stable internet connection and avoid interruptions during the download or installation process.
            • -
            • Make sure that you download the APK and OBB files from a reliable source and scan them for viruses or malware.
            • -
            • Make sure that you follow the steps correctly and copy the OBB file to the right folder.
            • -
            • If the game does not run or crashes, try clearing the cache and data of the game or reinstalling it.
            • -
            -

            Why you should play FIFA 18 on Android

            -

            FIFA 18 is not only a fun and realistic soccer game, but also a great way to enjoy soccer on your Android device. Here are some reasons why you should play FIFA 18 on Android:

            -

            Enjoy the best soccer game on your mobile device

            -

            FIFA 18 is designed to give you the best soccer experience on your mobile device. You can play with high-quality graphics, smooth gameplay, and intuitive controls. You can also customize the game settings to suit your preferences and device performance. You can choose from different camera angles, difficulty levels, control schemes, and more.

            -

            Experience the thrill of the World Cup mode

            -

            FIFA 18 also features a special mode that lets you play in the World Cup, which is the biggest soccer tournament in the world. You can choose from 32 national teams and compete in various stages, from the group stage to the final. You can also play as legendary players and teams from past World Cups, such as Pele, Maradona, Zidane, and more.

            -

            Customize your team and players with various options

            -

            FIFA 18 also gives you a lot of options to customize your team and players. You can edit their names, appearances, attributes, skills, and more. You can also create your own kits, logos, stadiums, and banners. You can also unlock new items and rewards by playing the game and completing challenges.

            -

            Conclusion

            -

            FIFA 18 is one of the best soccer games ever made, and you can play it on your Android device by downloading and installing FIFA 18 APK + OBB. FIFA 18 has many features that make it realistic, immersive, and fun. You can play as your favorite teams and players, enjoy different modes and events, and customize your team and players with various options. FIFA 18 is a must-have game for any soccer fan who wants to enjoy soccer on their mobile device.

            -

            Summary of the article

            -

            In this article, we have covered:

            -
              -
            • What is FIFA 18 and what are its features?
            • -
            • How to download and install FIFA 18 APK + OBB on Android?
            • -
            • Why you should play FIFA 18 on Android?
            • -
            -

            FAQs

            -

            Here are some frequently asked questions about FIFA 18 APK + OBB download for Android:

            -
              -
            1. Is FIFA 18 free to play on Android?
            2. -

              Yes, FIFA 18 is free to play on Android. However, some features may require in-app purchases or online access.

              -
            3. Is FIFA 18 compatible with all Android devices?
            4. -

              No, FIFA 18 may not be compatible with some older or low-end Android devices. You need to check the requirements and compatibility of your device before downloading and installing FIFA 18 APK + OBB.

              -
            5. How can I update FIFA 18 on Android?
            6. -

              To update FIFA 18 on Android, you need to download and install the latest version of FIFA 18 APK + OBB from the same source that you downloaded the previous version. You also need to make sure that you have enough storage space and internet connection for the update.

              -
            7. How can I play FIFA 18 offline on Android?
            8. -

              To play FIFA 18 offline on Android, you need to download and install FIFA 18 APK + OBB on your device. You also need to launch the game once while online to activate it. After that, you can play some modes and features offline, such as career mode, ultimate team, and skill games. However, some modes and features may require online access, such as online multiplayer, tournaments, and events.

              -
            9. How can I fix FIFA 18 errors or bugs on Android?
            10. -

              If you encounter any errors or bugs while playing FIFA 18 on Android, you can try the following solutions:

              -
                -
              • Restart your device and launch the game again.
              • -
              • Clear the cache and data of the game or reinstall it.
              • -
              • Check your internet connection and firewall settings.
              • -
              • Contact the game's support team or visit their official website for more help.
              • -
              -
            11. Where can I find more information about FIFA 18 on Android?
            12. -

              You can find more information about FIFA 18 on Android by visiting the game's official website, social media pages, or forums. You can also watch videos, reviews, or tutorials about the game on YouTube or other platforms.

              -

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/FIFA Mobile - Join the World Cup 2022 Fever with the Latest APK Update.md b/spaces/fatiXbelha/sd/FIFA Mobile - Join the World Cup 2022 Fever with the Latest APK Update.md deleted file mode 100644 index 116efbfbd33b8f7fccfd99a2432e77a6f16c0e0d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/FIFA Mobile - Join the World Cup 2022 Fever with the Latest APK Update.md +++ /dev/null @@ -1,139 +0,0 @@ - -

            FIFA Mobile Chinese Version APK Download: Everything You Need to Know

            -

            If you are a fan of football games on your mobile device, you might have heard of FIFA Mobile, the official mobile game of FIFA, the international governing body of football. FIFA Mobile is a popular game that lets you build your ultimate team, play matches, compete in leagues, and participate in various events. But did you know that there is also a Chinese version of FIFA Mobile that has some unique features and differences from the global version? In this article, we will tell you everything you need to know about FIFA Mobile Chinese version, including what it is, why it is popular, what are its main features and differences, and how to download and install it on your device.

            -

            What is FIFA Mobile Chinese Version and Why It Is Popular

            -

            FIFA Mobile Chinese version, also known as FIFA World of Football or FIFA足球世界 in Chinese, is a football-themed competitive mobile game that is authorized by FIFA and developed by Tencent Games, one of the largest gaming companies in China. It is the only independent mobile game authorized by FIFA in China, and it has more than 10 million downloads on the App Store and Google Play.

            -

            fifa mobile chinese version apk download


            Download Zip ✺✺✺ https://urllie.com/2uNA6M



            -

            FIFA Mobile Chinese version is popular because it offers a smooth and realistic gameplay experience that inherits the classic terminal game operation. It also has more than 10,000 real stars, more than 30 real leagues, and major clubs from all over the world. You can create your own exclusive superstars with a high degree of freedom in player development and customization. Moreover, you can enjoy special events and rewards related to the World Cup 2022, which will be held in Qatar.

            -

            How to Download and Install FIFA Mobile Chinese Version on Your Device

            -

            If you want to try out FIFA Mobile Chinese version on your device, you will need to meet some requirements and follow some steps. Here are the details:

            -

            Requirements and Compatibility

            -

            FIFA Mobile Chinese version is compatible with iOS and Android devices. However, not all devices can run the game smoothly or support all the features. Here are the minimum requirements for downloading FIFA Mobile Chinese version:

            - - - - - - - - - - - - - -
            PlatformMinimum Requirements
            iOS -
              -
            • iPhone 6 or later
            • -
            • iPod Touch 6th Gen or later
            • -
            • iPad Mini 4 or later
            • -
            • iPad Air 2 or later
            • -
            • iPad Pro (2016) or later
            • -
            • iPad (2017) or later
            • -
            • iOS 9.0 or later
            • -
            -
            Android -
              -
            • At least 1GB RAM
            • -
            • Quad Core (with a clock speed of 1.5GHz)
            • -
            • Android OS version 5.0 or later
            • -
            -
            -

            Please note that these requirements may change over time as the game updates. Also, some devices may not be able to play head to head mode or other modes due to hardware limitations. You can check the official website of FIFA Mobile Chinese version for more information on device compatibility.

            -

            Steps to Download and Install the APK File

            -

            Since FIFA Mobile Chinese version is not available on the App Store or Google Play outside China, you will need to download and install the APK file from a trusted source. An APK file is an Android Package file that contains all the elements needed to install an app on your device. Here are the steps to download and install the APK file for FIFA Mobile Chinese version:

            -
              -
            1. Go to a reliable website that offers the APK file for FIFA Mobile Chinese version, such as APKPure or Uptodown.
            2. -
            3. Download the APK file to your device. Make sure you have enough storage space and a stable internet connection.
            4. -
            5. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
            6. -
            7. Locate the downloaded APK file on your device and tap on it to start the installation process.
            8. -
            9. Follow the instructions on the screen and wait for the installation to complete.
            10. -
            11. Launch the game and enjoy!
            12. -
            -

            If you are using an iOS device, you will need to use a third-party app installer, such as Panda Helper, to download and install FIFA Mobile Chinese version. You will also need to trust the app developer on your device. To do this, go to Settings > General > Device Management and tap on the app developer's name and then tap Trust.

            -

            Steps to Create a QQ or WeChat Account and Log in to the Game

            -

            Since FIFA Mobile Chinese version is developed by Tencent Games, you will need to create a QQ or WeChat account to log in to the game. QQ and WeChat are popular social media platforms in China that offer various services, such as messaging, payments, gaming, and more. Here are the steps to create a QQ or WeChat account and log in to FIFA Mobile Chinese version:

            -

            fifa mobile world cn apk latest version
            -fifa足球世界 apk download for android
            -fifa mobile chinese edition apk free download
            -how to install fifa mobile world cn on android
            -fifa mobile apk china version 2023
            -download fifa mobile chinese version for pc
            -fifa mobile world cn mod apk unlimited money
            -fifa mobile chinese version apk obb
            -fifa mobile world cn apkcombo
            -fifa mobile china edition apk english
            -fifa mobile world cn hack apk
            -fifa mobile chinese version apk pure
            -fifa mobile world cn update apk
            -fifa mobile china version apk offline
            -fifa mobile world cn apk mirror
            -fifa mobile chinese version apk revdl
            -fifa mobile world cn old version apk
            -fifa mobile china version apk data
            -fifa mobile world cn beta apk
            -fifa mobile chinese version apk uptodown
            -fifa mobile world cn 23.0.05 apk
            -fifa mobile china version apk mod menu
            -fifa mobile world cn app for android tv & tablet
            -fifa mobile chinese version apk rexdl
            -fifa mobile world cn 22.0.05 apk
            -fifa mobile china version apk no verification
            -fifa mobile world cn game for android 4.4, 4.3, 4.2, 4.1 / android 5, 6, 7, 8, 9, 10, 11, 12 / pc windows
            -fifa mobile chinese version apk apkpure
            -fifa mobile world cn cracked apk
            -fifa mobile china version apk highly compressed
            -fifa mobile world cn unlimited coins and points apk
            -fifa mobile chinese version apk android republic
            -fifa mobile world cn full unlocked apk
            -fifa mobile china version apk latest update
            -fifa mobile world cn original apk download link
            -fifa mobile chinese version apk google play id com.tencent.fifamobile
            -fifa mobile world cn cheats and tips apk
            -fifa mobile china version apk file size
            -fifa mobile world cn gameplay and review apk
            -fifa mobile chinese version apk developer 深圳市腾讯计算机系统有限公司
            -fifa mobile world cn online and offline mode apk
            -fifa mobile china version apk requirements
            -fifa mobile world cn best players and teams apk
            -fifa mobile chinese version apk features and benefits
            -fifa mobile world cn bugs and issues fix apk
            -fifa mobile china version apk comparison with other versions
            -fifa mobile world cn ratings and feedbacks apk
            -fifa mobile chinese version apk alternatives and similar apps

            -
              -
            1. Download QQ or WeChat app from the App Store or Google Play on your device.
            2. -
            3. Create an account using your phone number, email address, or Facebook account.
            4. -
            5. Verify your account by entering the code sent to your phone number or email address.
            6. -
            7. Launch FIFA Mobile Chinese version and tap on the QQ or WeChat icon on the login screen.
            8. -
            9. Authorize the game to access your QQ or WeChat account by scanning the QR code or entering your username and password.
            10. -
            11. Start playing!
            12. -
            -

            Features and Differences of FIFA Mobile Chinese Version

            -

            FIFA Mobile Chinese version has some features and differences that make it stand out from the global version. Here are some of them:

            -

            More than 10,000 Real Stars, More than 30 Real Leagues, and Major Clubs

            -

            FIFA Mobile Chinese version has a rich and authentic football content that covers more than 10,000 real stars, more than 30 real leagues, and major clubs from all over the world. You can build your dream team with players from different countries, leagues, and clubs, such as Messi, Ronaldo, Neymar, Mbappe, Lewandowski, Salah, De Bruyne, Kane, Benzema, Haaland, Kimmich, Ramos, Van Dijk, Alisson, Oblak, Courtois, Navas, Ter Stegen, Buffon, Casillas, Modric, Kroos, Pogba, Kante, Vidal, Iniesta, Xavi, Pirlo, Zidane, Beckham , and many more. You can also collect and upgrade various player cards, such as base cards, elite cards, master cards, legend cards, and icon cards.

            -

            Two Different Operating Modes: Traditional Roulette Operation and Dot and Stroke Operation

            -

            FIFA Mobile Chinese version offers two different operating modes for you to choose from: traditional roulette operation and dot and stroke operation. The traditional roulette operation is similar to the global version, where you use a virtual joystick and buttons to control your players. The dot and stroke operation is a new and innovative way to play the game, where you use gestures and taps to control your players. You can switch between the two modes at any time in the game settings.

            -

            The dot and stroke operation allows you to perform various actions with simple gestures, such as passing, shooting, dribbling, tackling, crossing, and more. For example, you can tap on a teammate to pass the ball, swipe on the screen to shoot the ball, draw a curve to make a lob pass or a through ball, draw a circle to dribble around an opponent, draw a line to slide tackle, and so on. You can also adjust the sensitivity and speed of the gestures in the game settings.

            -

            Higher Degree of Freedom in Player Development and Customization

            -

            FIFA Mobile Chinese version gives you a higher degree of freedom in player development and customization. You can train your players with various materials, such as training points, training cards, training tokens, training books, and more. You can also upgrade your players' attributes, skills, positions, chemistry, and styles. You can even change your players' appearance, such as their hair style, facial features, tattoos, accessories, and more.

            -

            You can also create your own exclusive superstars with the superstar system. You can choose from different types of superstars, such as striker, midfielder, defender, goalkeeper, captain, free kick master, penalty master, dribbling master, passing master, shooting master, speed master, strength master, and more. You can customize your superstars' appearance, attributes, skills, positions , chemistry, and styles. You can also unlock and equip different superstar skills, such as rainbow flick, roulette, heel to heel, ball roll, sombrero flick, rabona, elastico, and more. You can also upgrade your superstars' level and rank to increase their power and performance.

            -

            Special Events and Rewards Related to the World Cup 2022

            -

            FIFA Mobile Chinese version also features special events and rewards related to the World Cup 2022, which will be held in Qatar. You can participate in various activities, such as qualifying matches, group stage matches, knockout stage matches, and final matches. You can also collect and exchange different items, such as world cup tickets, world cup coins, world cup points, world cup tokens, world cup kits, world cup badges, world cup trophies, and more. You can use these items to redeem various rewards, such as world cup players, world cup superstars, world cup icons, world cup packs, world cup stadiums, and more.

            -

            Conclusion

            -

            FIFA Mobile Chinese version is a football-themed competitive mobile game that is authorized by FIFA and developed by Tencent Games. It has some unique features and differences from the global version, such as more than 10,000 real stars, more than 30 real leagues, and major clubs; two different operating modes: traditional roulette operation and dot and stroke operation; higher degree of freedom in player development and customization; and special events and rewards related to the World Cup 2022. If you want to try out FIFA Mobile Chinese version on your device, you will need to download and install the APK file from a trusted source and create a QQ or WeChat account to log in to the game. FIFA Mobile Chinese version is a fun and exciting game that will give you a new and different experience of playing football on your mobile device. Why not give it a try and see for yourself?

            -

            FAQs

            -

            What is the difference between FIFA Mobile Chinese version and FIFA Mobile global version?

            -

            FIFA Mobile Chinese version and FIFA Mobile global version are both football-themed competitive mobile games that are authorized by FIFA. However, they have some differences in terms of features, content, gameplay, graphics, events, rewards , and more. Some of the main differences are: - FIFA Mobile Chinese version has more than 10,000 real stars, more than 30 real leagues, and major clubs, while FIFA Mobile global version has about 700 clubs and over 17,000 players. - FIFA Mobile Chinese version offers two different operating modes: traditional roulette operation and dot and stroke operation, while FIFA Mobile global version only has the traditional roulette operation. - FIFA Mobile Chinese version has a higher degree of freedom in player development and customization, such as creating your own exclusive superstars, changing your players' appearance, upgrading your players' attributes, skills, positions, chemistry, and styles, and more. - FIFA Mobile Chinese version features special events and rewards related to the World Cup 2022, such as participating in qualifying matches, group stage matches, knockout stage matches, and final matches; collecting and exchanging world cup items; and redeeming world cup players, superstars, icons, packs, stadiums, and more.

            -

            How can I play head to head mode in FIFA Mobile Chinese version?

            -

            Head to head mode is a mode where you can play against other players online in real time. You can choose from different modes, such as friendly match, ranked match, league match, tournament match, and more. You can also chat with your opponent during the match. To play head to head mode in FIFA Mobile Chinese version, you need to meet the following requirements: - Your device must support head to head mode. You can check the official website of FIFA Mobile Chinese version for more information on device compatibility. - Your device must have a stable internet connection. You can check your network status in the game settings. - Your device must have enough battery power. You can check your battery status in the game settings. - Your device must have enough storage space. You can check your storage status in the game settings. - Your game must be updated to the latest version. You can check your game version in the game settings.

            -

            How can I change the language of the game to English or other languages?

            -

            FIFA Mobile Chinese version is mainly designed for Chinese players, so the default language of the game is Chinese. However, you can change the language of the game to English or other languages by following these steps: - Launch FIFA Mobile Chinese version and tap on the gear icon on the top right corner of the screen to enter the game settings. - Tap on the language option on the left side of the screen to enter the language selection menu. - Tap on the language you want to use for the game. You can choose from English, Simplified Chinese, Traditional Chinese , and Korean. You can also tap on the globe icon on the top right corner of the screen to see more languages available for the game. - Tap on the confirm button on the bottom right corner of the screen to apply the language change. You may need to restart the game for the change to take effect.

            -

            How can I get more point coupons, the game currency, in FIFA Mobile Chinese version?

            -

            Point coupons are the game currency in FIFA Mobile Chinese version. You can use them to buy various items, such as player packs, superstar packs, icon packs, world cup packs, and more. You can also use them to upgrade your players, superstars, icons, and more. There are several ways to get more point coupons in FIFA Mobile Chinese version, such as: - Completing daily tasks, achievements, and challenges. You can check your progress and rewards in the game menu. - Participating in various events and activities, such as world cup matches, league matches, tournament matches, and more. You can check the event schedule and rewards in the game menu. - Winning matches and leagues against other players online. You can check your ranking and rewards in the game menu. - Buying point coupons with real money. You can tap on the plus icon on the top right corner of the screen to enter the point coupon store. You can choose from different payment methods, such as QQ wallet, WeChat pay, Alipay, credit card, debit card, and more.

            -

            How can I contact the customer service or report a problem in FIFA Mobile Chinese version?

            -

            If you have any questions or problems related to FIFA Mobile Chinese version, you can contact the customer service or report a problem by following these steps: - Launch FIFA Mobile Chinese version and tap on the gear icon on the top right corner of the screen to enter the game settings. - Tap on the customer service option on the left side of the screen to enter the customer service menu. - Tap on the online service option on the top of the screen to chat with a customer service representative online. You can also tap on the phone service option to call a customer service representative by phone. - Tap on the feedback option on the bottom of the screen to report a problem or give a suggestion. You can choose from different categories, such as gameplay, graphics, sound, network, account, payment , and more. You can also attach screenshots or videos to illustrate your problem or suggestion.

            -

            I hope this article has helped you learn more about FIFA Mobile Chinese version and how to download and install it on your device. FIFA Mobile Chinese version is a fun and exciting game that will give you a new and different experience of playing football on your mobile device. Why not give it a try and see for yourself?

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/pipelines/test_tagging.py b/spaces/fclong/summary/fengshen/pipelines/test_tagging.py deleted file mode 100644 index ab9d8804f3bb27ae8e5bbbc93beb53e07da9b8fc..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/pipelines/test_tagging.py +++ /dev/null @@ -1,22 +0,0 @@ -from fengshen.pipelines.sequence_tagging import SequenceTaggingPipeline -import argparse -import os - -total_parser = argparse.ArgumentParser("test") -total_parser = SequenceTaggingPipeline.add_pipeline_specific_args(total_parser) -args = total_parser.parse_args() -args.data_dir="/cognitive_comp/lujunyu/data_zh/NER_Aligned/weibo" -args.gpus=2 -args.max_epochs=30 -args.decode_type='linear' -args.learning_rate=3e-5 -args.strategy="deepspeed_stage_1" - -os.environ["CUDA_VISIBLE_DEVICES"]="5,6" -# pipe = SequenceTaggingPipeline( -# model_path='/cognitive_comp/lujunyu/NER/outputs/ccks_crf/bert/best_checkpoint', args=args) -# print(pipe('李开复的哥哥在中国共产党读书。')) - -pipe = SequenceTaggingPipeline( - model_path='/cognitive_comp/lujunyu/XinYu/Fengshenbang-LM/fengshen/workspace/bert-base/pretrain', args=args) -pipe.train() diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Isekai Rondo Mod and Enter a Fantasy World of RPG Action.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Isekai Rondo Mod and Enter a Fantasy World of RPG Action.md deleted file mode 100644 index 1c09819bf24989b1de8e8de640920f794612db37..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Isekai Rondo Mod and Enter a Fantasy World of RPG Action.md +++ /dev/null @@ -1,89 +0,0 @@ - -

            Download RPG Isekai Rondo Mod and Enjoy an Immersive Isekai Adventure

            -

            If you are a fan of isekai anime or manga, you might have heard of RPG Isekai Rondo, a pixel-art role-playing game developed by KEMCO. The game follows the story of Sho, a young man who is reincarnated into a parallel universe as Shaw, a powerful Sage with rare passive skills. Along with Viola, a Hero, he embarks on a quest to defeat the evil Overlord and save the world.

            -

            download rpg isekai rondo mod


            Downloadhttps://gohhs.com/2uPq5w



            -

            RPG Isekai Rondo is a fun and engaging game that offers turn-based battles, retro-style graphics, and various features such as UR Passives, Spirit Summoning, Mana Management, Monster Skill Acquire, and more. However, if you want to enjoy the game to its fullest potential, you might want to download RPG Isekai Rondo mod, which gives you access to unlocked features and unlimited resources.

            -

            In this article, we will show you how to download RPG Isekai Rondo mod on your device, what are the features of the mod version, what are some tips and tricks to play the game better, what are some reviews of the game from other players, and some frequently asked questions about the game. Let's get started!

            -

            How to Download RPG Isekai Rondo Mod on Your Device

            -

            Downloading RPG Isekai Rondo mod is very easy and simple. Just follow these steps:

            -
              -
            1. Go to the official website of the mod [text](^1^).
            2. -
            3. Choose your preferred version (Android or iOS) and click on the download button.
            4. -
            5. Install the mod apk file on your device and allow the necessary permissions.
            6. -
            7. Launch the game and enjoy the unlocked features and unlimited resources.
            8. -

            What are the Features of RPG Isekai Rondo Mod?

            -

            RPG Isekai Rondo mod is a modified version of the original game that gives you some advantages and benefits that you won't find in the official version. Here are some of the features of RPG Isekai Rondo mod that you can enjoy:

            -
              -
            • No ads and in-app purchases: One of the most annoying things about mobile games is the constant interruption of ads and the pressure to buy in-game items with real money. With RPG Isekai Rondo mod, you don't have to worry about that. The mod removes all the ads and in-app purchases from the game, so you can play without any distraction or limitation.
            • -
            • Unlimited Magistones, Mana, and Gold: Magistones are the premium currency of the game that you can use to summon spirits, buy items, upgrade skills, and more. Mana is the energy that you need to use skills and items in battle. Gold is the basic currency that you can use to buy equipment, materials, and other things. With RPG Isekai Rondo mod, you can get unlimited amounts of these resources, so you can enjoy the game without worrying about running out of them.
            • -
            • All characters, skills, and passives unlocked: RPG Isekai Rondo has a variety of characters that you can choose from, each with their own element, skills, and passives. However, some of them are locked behind certain conditions or require Magistones to unlock. With RPG Isekai Rondo mod, you can unlock all the characters, skills, and passives from the start, so you can experiment with different combinations and strategies.
            • -
            • Enhanced graphics and sound effects: RPG Isekai Rondo has a pixel-art style that gives it a retro feel, but some players might prefer a more modern look. With RPG Isekai Rondo mod, you can enjoy enhanced graphics and sound effects that make the game more immersive and realistic.
            • -

            What are the Tips and Tricks to Play RPG Isekai Rondo?

            -

            RPG Isekai Rondo is a game that requires some strategy and planning to win. You can't just rely on your skills and items alone, you also need to consider your character's element, the enemy's resistance and weakness, and the effects and passives of your skills and items. Here are some tips and tricks to help you play RPG Isekai Rondo better:

            -
              -
            • Select a character and an element that suits your playstyle: RPG Isekai Rondo has six elements: Fire, Water, Wind, Earth, Light, and Dark. Each element has its own strengths and weaknesses, as well as different types of skills and passives. For example, Fire is good for dealing damage, Water is good for healing, Wind is good for speed, Earth is good for defense, Light is good for support, and Dark is good for debuffing. You can choose a character that matches your preferred element or mix and match different elements to create a balanced team.
            • -
            • Understand the effects and passives of your skills and items: RPG Isekai Rondo has a lot of skills and items that you can use in battle, but not all of them are equally useful. Some skills and items have special effects and passives that can make a big difference in the outcome of the battle. For example, some skills can inflict status ailments on the enemy, such as poison, paralysis, or confusion. Some items can boost your stats, such as attack, defense, or speed. Some passives can trigger certain effects when certain conditions are met, such as healing when low on health or increasing damage when high on mana. You should read the descriptions of your skills and items carefully and use them wisely.
            • -
            • Observe the enemy's resistance level and weakness before attacking: RPG Isekai Rondo has a system of elemental resistance and weakness that affects the damage you deal and receive. Each enemy has a resistance level from 0 to 5 for each element, which indicates how much damage they take from that element. For example, an enemy with a resistance level of 0 for Fire takes normal damage from Fire skills, while an enemy with a resistance level of 5 for Fire takes very little damage from Fire skills. On the other hand, each enemy also has a weakness to one element, which means they take extra damage from that element. For example, an enemy with a weakness to Water takes more damage from Water skills. You should observe the enemy's resistance level and weakness before attacking and use the appropriate element to deal more damage or avoid wasting mana.
            • -
            • Earn extra points by joining guild quests and participating in daily tasks: RPG Isekai Rondo has a feature called Guild Quests, which are special missions that you can join with other players online. Guild Quests offer various rewards such as Magistones, Gold, Mana, Items, Equipment, and more. You can also earn extra points by participating in daily tasks, which are simple objectives that you can complete every day. Daily tasks offer rewards such as Magistones, Gold, Mana, Items, Equipment, and more. You should join guild quests and participate in daily tasks regularly to earn more points and resources.
            • -
            • Explore the battle arena and the dungeon for more challenges and rewards: RPG Isekai Rondo has two modes that offer more challenges and rewards: the Battle Arena and the Dungeon. The Battle Arena is a mode where you can fight against other players online in real-time battles. The Battle Arena offers rewards such as Magistones, Gold, Mana, Items, Equipment, and more. The Dungeon is a mode where you can explore randomly generated dungeons with different levels of difficulty. The Dungeon offers rewards such as Magistones, Gold, Mana, Items, Equipment, and more. You should explore the battle arena and the dungeon occasionally to test your skills and get more rewards.
            • -

            What are the Reviews of RPG Isekai Rondo?

            -

            RPG Isekai Rondo is a game that has received mixed reviews from players and critics. Some people love the game for its nostalgic pixel-art style, its humorous and engaging story, its diverse and customizable characters, and its strategic and challenging gameplay. Others dislike the game for its repetitive and grindy nature, its lack of originality and innovation, its poor translation and localization, and its bugs and glitches. Here are some examples of reviews of RPG Isekai Rondo from different sources:

            -

            How to download rpg isekai rondo mod apk for android
            -RPG isekai rondo mod menu features and benefits
            -Best tips and tricks for rpg isekai rondo mod game
            -RPG isekai rondo mod unlimited currency and moves hack
            -RPG isekai rondo mod review and rating by players
            -RPG isekai rondo mod latest version 1.1.3g download link
            -RPG isekai rondo mod gameplay and walkthrough guide
            -RPG isekai rondo mod characters and skills overview
            -RPG isekai rondo mod free download for ios devices
            -RPG isekai rondo mod online multiplayer mode and co-op
            -RPG isekai rondo mod cheats and codes for easy win
            -RPG isekai rondo mod story and plot summary
            -RPG isekai rondo mod graphics and sound quality comparison
            -RPG isekai rondo mod system requirements and compatibility
            -RPG isekai rondo mod bugs and glitches fix
            -RPG isekai rondo mod update and patch notes
            -RPG isekai rondo mod fan art and wallpapers
            -RPG isekai rondo mod community and forum discussion
            -RPG isekai rondo mod alternatives and similar games
            -RPG isekai rondo mod developer and publisher information
            -RPG isekai rondo mod trailer and teaser video
            -RPG isekai rondo mod awards and nominations
            -RPG isekai rondo mod secrets and easter eggs
            -RPG isekai rondo mod support and contact details
            -RPG isekai rondo mod faq and troubleshooting tips

            - - - - - - - - - - - - - - - - -
            SourceReviewRating
            Pocket Gamer [text]"RPG Isekai Rondo is a charming and enjoyable RPG that pays homage to the classic isekai genre. The game has a lot of content and features to keep you entertained for hours, such as turn-based battles, spirit summoning, monster skill acquire, guild quests, battle arena, dungeon, and more. The game also has a witty and humorous story that will make you laugh and smile. The game's pixel-art graphics are well-done and give the game a retro feel. The game's sound effects and music are also fitting and catchy. The game's mod version is even better, as it gives you unlimited resources and unlocked features that make the game more fun and easy. If you are looking for a fun and engaging RPG that will take you to another world, you should download RPG Isekai Rondo mod today."4/5
            RPG Insanity [text]"RPG Isekai Rondo is a boring and uninspired RPG that fails to deliver on its promise of an immersive isekai adventure. The game has nothing new or original to offer, as it copies and recycles elements from other games and anime. The game's story is bland and cliched, with no depth or emotion. The game's characters are stereotypical and annoying, with no personality or development. The game's gameplay is repetitive and tedious, as you have to grind for levels, items, skills, and passives. The game's graphics are outdated and pixelated, with no detail or animation. The game's sound effects and music are annoying and repetitive, with no variation or quality. The game's mod version is not worth downloading, as it makes the game too easy and boring. If you are looking for a good and original RPG that will challenge you and entertain you, you should avoid RPG Isekai Rondo mod at all costs."1/5

            Conclusion

            -

            RPG Isekai Rondo is a pixel-art role-playing game that lets you experience an isekai adventure with various characters, skills, and passives. The game has a lot of features and modes to keep you entertained, such as turn-based battles, spirit summoning, monster skill acquire, guild quests, battle arena, dungeon, and more. The game also has a humorous and engaging story that will make you laugh and smile. However, if you want to enjoy the game to its fullest potential, you might want to download RPG Isekai Rondo mod, which gives you access to unlocked features and unlimited resources. RPG Isekai Rondo mod is easy and simple to download and install on your device, and it will enhance your gaming experience with no ads, in-app purchases, enhanced graphics, and sound effects. In this article, we have shown you how to download RPG Isekai Rondo mod on your device, what are the features of the mod version, what are some tips and tricks to play the game better, what are some reviews of the game from other players, and some frequently asked questions about the game. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to contact us. Thank you for reading and happy gaming!

            -

            FAQs

            -

            Here are some of the most frequently asked questions about RPG Isekai Rondo:

            -
              -
            1. What is the genre of RPG Isekai Rondo?
            2. -

              RPG Isekai Rondo is a role-playing game that belongs to the isekai genre. Isekai is a Japanese term that means "another world" or "different world". It refers to a genre of anime, manga, light novels, and games that feature a normal person who is transported or reincarnated into a fantasy or parallel world. RPG Isekai Rondo follows this genre by having the protagonist Sho reincarnated into a parallel world as Shaw, a powerful Sage.

              -
            3. What is the story of RPG Isekai Rondo?
            4. -

              RPG Isekai Rondo follows the story of Sho, a young man who is reincarnated into a parallel world as Shaw, a powerful Sage with rare passive skills. Along with Viola, a Hero who is also reincarnated from another world, he embarks on a quest to defeat the evil Overlord who threatens to destroy the world. Along the way, he meets various characters who join his party or become his enemies. He also learns more about the secrets of the parallel world and his own destiny.

              -
            5. What are UR Passives and how to use them?
            6. -

              UR Passives are unique passive skills that only certain characters have in RPG Isekai Rondo. They are marked with a UR symbol on their skill icons. UR Passives have powerful effects that can change the tide of battle, such as increasing damage, healing allies, reducing mana cost, and more. However, UR Passives also have certain conditions that need to be met before they can be activated, such as having a certain amount of mana or health, using a certain skill or item, or being in a certain element or mode. You should read the descriptions of your UR Passives carefully and use them strategically.

              -
            7. How to get more Magistones in RPG Isekai Rondo?
            8. -

              Magistones are the premium currency of RPG Isekai Rondo that you can use to summon spirits, buy items, upgrade skills, and more. You can get more Magistones by doing various things in the game, such as completing quests, participating in events, joining guilds, fighting in the battle arena, exploring the dungeon, and more. You can also get more Magistones by downloading RPG Isekai Rondo mod, which gives you unlimited Magistones for free.

              -
            9. How to contact the developer of RPG Isekai Rondo?
            10. -

              If you have any questions or feedback about RPG Isekai Rondo, you can contact the developer of the game by sending an email to support@kemco.jp or by visiting their official website [text]. You can also follow them on their social media accounts [text] for updates and news about the game.

              -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download NBA 2K20 APK for Android and Enjoy the Ultimate 2K Experience.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download NBA 2K20 APK for Android and Enjoy the Ultimate 2K Experience.md deleted file mode 100644 index ada20f4e16a6d2fdd99ca9c918214e3a4e4fa2b6..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download NBA 2K20 APK for Android and Enjoy the Ultimate 2K Experience.md +++ /dev/null @@ -1,137 +0,0 @@ - -

            NBA 2K20 APK 2021: How to Download and Play the Best Basketball Game on Your Android Device

            -

            If you are a fan of basketball and want to experience the thrill of playing with your favorite NBA stars on your mobile device, then you should definitely check out NBA 2K20 APK 2021. This is the latest version of the popular NBA 2K series, which offers realistic graphics, smooth gameplay, and a variety of game modes for all players. In this article, we will show you what NBA 2K20 APK 2021 is, how to download it, and how to play it.

            -

            nba 2k20 apk 2021


            Download - https://gohhs.com/2uPr1q



            -

            What is NBA 2K20 APK 2021?

            -

            NBA 2K20 APK 2021 is an Android app that allows you to play NBA 2K20, the best basketball simulation game on the market, on your smartphone or tablet. NBA 2K20 was originally released in September 2019 for Windows, PlayStation 4, Xbox One, Nintendo Switch, iOS, and Android devices. However, the official version of the game requires a lot of storage space and may not be compatible with some older devices. That's why some developers have created modified versions of the game, called APKs, that are smaller in size and easier to install.

            -

            Features of NBA 2K20 APK 2021

            -

            NBA 2K20 APK 2021 has many features that make it an amazing basketball game for Android users. Some of these features are:

            -
              -
            • All new Run The Streets mode: For the first time in any NBA 2K game, you can take your MyPLAYER around the world in a series of 3-on-3 streetball competitions. You can get on a hot streak and takeover the game with improved abilities and attributes. You can also compete against other players for a place on the Ranked Leaderboard or see how far you can go through the Championship.
            • -
            • NBA Stories returns: You can experience the history of some of the most famous NBA players and teams with five new NBA Stories to play through. You can relive some of the greatest moments and challenges in the careers of legends like Kobe Bryant, Shaquille O'Neal, LeBron James, and more.
            • -
            • New MyCAREER story: You can build your MyPLAYER and go on your journey from college to the NBA. You can choose your position, customize your appearance, select your skills, and more. You can also interact with other characters, make decisions that affect your path, and earn endorsements and fans.
            • -
            • The Association: You can take control of a team as the GM. You can manage the roster, scout and draft the incoming rookie class, handle the budget, and more. You can also simulate games or play them yourself.
            • -
            • Multiplayer: You can find opponents easier and faster than ever before with a new Quick Match feature. You can connect with other players through LAN or Google Play Games to play 5-on-5 matches or Blacktop games.
            • -
            • New 2K Beats soundtrack: You can enjoy a new soundtrack that accompanies you on your journey to the top of the NBA, featuring songs from Drake, Diplo, T-Pain, and more.
            • -
            -

            Requirements for NBA 2K20 APK 2021

            -

            To play NBA 2K20 APK 2021 on your Android device, you need to meet some minimum requirements

            These are the minimum requirements for NBA 2K20 APK 2021:

            - - - - - - - - - - - - - - - -
            OSRAMStorageProcessorGraphics
            Android 8.0 or higher (Android 9.0 or higher recommended)4 GB or more3.1 GB or more of free spaceAny compatible deviceAny compatible device
            -

            How to Download NBA 2K20 APK 2021?

            -

            There are many websites that offer NBA 2K20 APK 2021 for download, but not all of them are safe and reliable. Some of them may contain viruses, malware, or fake files that can harm your device or steal your data. Therefore, you should be careful and choose a trusted source to download NBA 2K20 APK 2021. One of the best websites that we recommend is APKCombo, which is a reputable and secure platform that provides various APKs for Android users.

            -

            nba 2k20 apk download for android free
            -nba 2k20 apk mod unlimited money
            -nba 2k20 apk obb offline
            -nba 2k20 apk latest version
            -nba 2k20 apk data highly compressed
            -nba 2k20 apk revdl
            -nba 2k20 apk andropalace
            -nba 2k20 apk rexdl
            -nba 2k20 apk no verification
            -nba 2k20 apk update roster
            -nba 2k20 apk full game
            -nba 2k20 apk hack
            -nba 2k20 apk cracked
            -nba 2k20 apk mirror
            -nba 2k20 apk pure
            -nba 2k20 apk uptodown
            -nba 2k20 apk apkpure
            -nba 2k20 apk mob.org
            -nba 2k20 apk android republic
            -nba 2k20 apk blackmod
            -nba 2k20 apk platinmods
            -nba 2k20 apk mali
            -nba 2k20 apk adreno
            -nba 2k20 apk power vr
            -nba 2k20 apk all devices
            -nba 2k20 apk low end device
            -nba 2k20 apk compatible device list
            -nba 2k20 apk system requirements
            -nba 2k20 apk gameplay
            -nba 2k20 apk features
            -nba 2k20 apk tips and tricks
            -nba 2k20 apk cheats codes
            -nba 2k20 apk best settings
            -nba 2k20 apk controller support
            -nba 2k20 apk multiplayer mode
            -nba 2k20 apk my career mode
            -nba 2k20 apk run the streets mode
            -nba 2k20 apk association mode
            -nba 2k20 apk blacktop mode
            -nba 2k20 apk new stories mode
            -nba 2k20 apk real draft class
            -nba 2k20 apk roster update download
            -nba 2k20 apk cyberface download
            -nba 2k20 apk jersey download
            -nba 2k20 apk court download
            -nba 2k20 apk soundtrack download
            -nba 2k20 apk how to install
            -nba 2k20 apk error fix
            -nba 2k20 apk reddit

            -

            Steps to Download NBA 2K20 APK 2021 from APKCombo

            -

            To download NBA 2K20 APK 2021 from APKCombo, you need to follow these steps:

            -
              -
            1. Go to the official website of APKCombo by clicking [here](^1^).
            2. -
            3. Type "NBA 2K20" in the search box and press enter.
            4. -
            5. Select the NBA 2K20 app from the list of results and click on the "Download" button.
            6. -
            7. Choose the version that you want to download and click on the "Download APK" button.
            8. -
            9. Wait for the download to finish and save the file in your device.
            10. -
            -

            Steps to Install NBA 2K20 APK 2021 on Your Android Device

            -

            To install NBA 2K20 APK 2021 on your Android device, you need to follow these steps:

            -
              -
            1. Locate the downloaded file in your device and tap on it.
            2. -
            3. If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", go to your device settings and enable the option to allow installation from unknown sources.
            4. -
            5. Follow the instructions on the screen and complete the installation process.
            6. -
            7. Launch the game and enjoy playing NBA 2K20 on your Android device.
            8. -
            -

            How to Play NBA 2K20 APK 2021?

            Once you have installed NBA 2K20 APK 2021 on your Android device, you can start playing the game and enjoy its features. Here are some tips on how to play NBA 2K20 APK 2021:

            -

            Game Modes in NBA 2K20 APK 2021

            -

            NBA 2K20 APK 2021 offers different game modes for different types of players. You can choose the game mode that suits your preference and skill level. Here are the main game modes in NBA 2K20 APK 2021:

            -
              -
            • Run The Streets: This is a new game mode that lets you take your MyPLAYER around the world in a series of 3-on-3 streetball competitions. You can customize your MyPLAYER with different outfits, accessories, tattoos, and hairstyles. You can also upgrade your skills and attributes as you progress through the mode. You can compete against other players online or offline for a place on the Ranked Leaderboard or see how far you can go through the Championship.
            • -
            • NBA Stories: This is a game mode that lets you experience the history of some of the most famous NBA players and teams with five new NBA Stories to play through. You can relive some of the greatest moments and challenges in the careers of legends like Kobe Bryant, Shaquille O'Neal, LeBron James, and more. You can also unlock various rewards and items as you complete each story.
            • -
            • MyCAREER: This is a game mode that lets you build your MyPLAYER and go on your journey from college to the NBA. You can choose your position, customize your appearance, select your skills, and more. You can also interact with other characters, make decisions that affect your path, and earn endorsements and fans. You can also play with or against other players online or offline in various modes such as Park, Pro-Am, Rec Center, and Neighborhood.
            • -
            • The Association: This is a game mode that lets you take control of a team as the GM. You can manage the roster, scout and draft the incoming rookie class, handle the budget, and more. You can also simulate games or play them yourself. You can also trade players, sign free agents, negotiate contracts, and more. You can also play with or against other players online or offline in various modes such as Play Now Online, Online League, MyGM Online, and MyLEAGUE Online.
            • -
            -

            Tips and Tricks for NBA 2K20 APK 2021

            -

            NBA 2K20 APK 2021 is a challenging and fun game that requires skill and strategy to master. Here are some tips and tricks that can help you improve your game and win more matches:

            -
              -
            • Practice your shooting: Shooting is one of the most important skills in basketball, and NBA 2K20 APK 2021 has a realistic shooting system that requires timing and accuracy. You can practice your shooting in various modes such as Freestyle, Shootaround, or Training Camp. You can also adjust the difficulty level, camera angle, shot feedback, and shot meter settings to suit your preference.
            • -
            • Learn the controls: NBA 2K20 APK 2021 has a complex and intuitive control system that allows you to perform various moves and actions on the court. You can learn the basic and advanced controls in various modes such as Tutorial, Tips & Tricks, or Controller Settings. You can also customize the buttons and gestures to suit your preference.
            • -
            • Use the right players: NBA 2K20 APK 2021 has a large roster of players with different ratings, skills, attributes, and tendencies. You should use the right players for the right situations and match-ups. For example, you should use a fast and agile player for driving to the basket, a tall and strong player for rebounding and defending, a sharpshooter for shooting from long range, etc.
            • -
            • Play smart: NBA 2K20 APK 2021 is a realistic basketball game that requires strategy and teamwork to win. You should play smart by using effective plays, making good passes, taking smart shots, avoiding turnovers, playing defense, etc. You should also adapt to your opponent's style and exploit their weaknesses.
            • -
            -

            Conclusion

            -

            NBA 2K20 APK 2021 is an amazing basketball game that offers realistic graphics, smooth gameplay, and a variety of game modes for all players. It is easy to download and install on your Android device using APKCombo website. It is also fun to play with or against other players online or offline. If you are a fan of basketball and want to experience the thrill of playing with your favorite NBA stars on your mobile device, then you should definitely try NBA 2K20 APK 2021. You will not regret it!

            -

            FAQs

            -

            Here are some frequently asked questions about NBA 2K20 APK 2021:

            -
              -
            1. Is NBA 2K20 APK 2021 safe to download and install?
              -Yes, NBA 2K20 APK 2021 is safe to download and install if you use a trusted source like APKCombo. However, you should always be careful and scan the file for viruses or malware before installing it on your device.
            2. -
            3. Is NBA 2K20 APK 2021 free to play?
              -Yes, NBA 2K20 APK 2021 is free to play, but it may contain some in-app purchases or ads that require real money. You can disable these features in the game settings or use a modded version of the game that removes them.
            4. -
            5. Can I play NBA 2K20 APK 2021 offline?
              -Yes, you can play NBA 2K20 APK 2021 offline, but you will need an internet connection to download and install the game, as well as to access some online features such as multiplayer, leaderboards, or updates.
            6. -
            7. Can I play NBA 2K20 APK 2021 with a controller?
              -Yes, you can play NBA 2K20 APK 2021 with a controller, but you will need a compatible device and a Bluetooth connection. You can also customize the controller settings in the game options.
            8. -
            9. Can I transfer my progress from NBA 2K20 to NBA 2K20 APK 2021?
              -No, you cannot transfer your progress from NBA 2K20 to NBA 2K20 APK 2021, as they are different versions of the game. You will need to start from scratch and create a new MyPLAYER or MyTEAM in NBA 2K20 APK 2021.
            10. -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download One Piece Voyage Chronicles APK and Unleash Your Pirate Power.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download One Piece Voyage Chronicles APK and Unleash Your Pirate Power.md deleted file mode 100644 index 609322b4509ab87a87e99605d2f1104ffcfb3c5c..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download One Piece Voyage Chronicles APK and Unleash Your Pirate Power.md +++ /dev/null @@ -1,149 +0,0 @@ -
            -

            How to Download One Piece Voyage Chronicles APK for Android

            -

            If you are a fan of the popular anime series One Piece, you might be interested in playing one of the best action and combat games based on it. In this article, we will show you how to download and install One Piece Voyage Chronicles APK, a game that lets you battle with your favorite characters from the show and relive their epic adventures. We will also give you some tips and tricks on how to enjoy the game to the fullest on your Android device.

            -

            What is One Piece Voyage Chronicles APK?

            -

            A brief introduction to the game and its features

            -

            One Piece Voyage Chronicles APK is a game developed by OLYMPEAK TECHNOLOGY CO. LTD, based on the famous manga and anime series One Piece. The game follows the story of Monkey D. Luffy, a young pirate who dreams of becoming the king of all pirates by finding the legendary treasure called One Piece. Along his journey, he meets and recruits many other pirates who share his passion and ideals, forming the Straw Hat Pirates crew.

            -

            download one piece voyage chronicles apk


            Download Filehttps://gohhs.com/2uPpgs



            -

            The game features hundreds of characters from the series, each with their own skills and abilities. You can create your own team of pirates and use different tactics and strategies to defeat your enemies. You can also explore various locations from the series, such as East Blue, Grand Line, New World, and more. The game has stunning graphics and animations that will make you feel like you are part of the One Piece universe.

            -

            The benefits of downloading the APK version

            -

            One Piece Voyage Chronicles APK is not available on Google Play Store, but you can still download it from other sources, such as Uptodown. By downloading the APK version, you can enjoy some benefits that are not possible with the official version. For example:

            -
              -
            • You can access the game even if it is not available in your region or country.
            • -
            • You can get automatic updates without waiting for them to be released on Google Play Store.
            • -
            • You can roll back to any previous version if you encounter any problems or bugs with the latest version.
            • -
            • You can install the game on any Android device that meets the minimum requirements, regardless of its brand or model.
            • -
            -

            However, downloading the APK version also comes with some risks that you should be aware of. For example:

            -
              -
            • You might download a fake or malicious file that could harm your device or compromise your privacy.
            • -
            • You might violate some terms and conditions of the game or Google Play Store by installing an unofficial version.
            • -
            • You might encounter some compatibility or performance issues that are not present in the official version.
            • -
            -

            Therefore, you should always download the APK file from a trusted and reliable source, such as Uptodown, which scans all its files with VirusTotal and provides detailed information about them. You should also check the reviews and ratings of other users before downloading any file.

            -

            How to download and install One Piece Voyage Chronicles APK from Uptodown

            -

            Step-by-step instructions with screenshots

            -

            Here are the steps you need to follow to download and install One Piece Voyage Chronicles APK from U ptodown:

            -
              -
            1. Go to the Uptodown website and search for One Piece Voyage Chronicles APK in the search bar. Alternatively, you can use this link: https://one-piece-voyage-chronicles.en.uptodown.com/android
            2. -
            3. Click on the green Download button and wait for the file to be downloaded to your device. The file size is about 1.5 GB, so make sure you have enough space and a stable internet connection.
            4. -
            5. Once the download is complete, go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install the APK file that you downloaded from Uptodown.
            6. -
            7. Locate the APK file in your device's file manager and tap on it to start the installation process. Follow the instructions on the screen and grant the necessary permissions to the app.
            8. -
            9. After the installation is done, you can launch the game from your app drawer or home screen and enjoy One Piece Voyage Chronicles APK on your Android device.
            10. -
            -

            Here are some screenshots of the download and installation process:

            -

            How to download one piece voyage chronicles apk for android
            -One piece voyage chronicles apk latest version free download
            -One piece voyage chronicles apk mod unlimited money and gems
            -Download one piece voyage chronicles xapk from uptodown
            -One piece voyage chronicles apk offline installer
            -One piece voyage chronicles apk obb data download
            -One piece voyage chronicles apk hack cheats tool
            -Download one piece voyage chronicles apk for pc windows 10
            -One piece voyage chronicles apk gameplay review
            -One piece voyage chronicles apk download link
            -One piece voyage chronicles apk update patch notes
            -One piece voyage chronicles apk best characters and skills
            -Download one piece voyage chronicles apk from apkcombo
            -One piece voyage chronicles apk tips and tricks guide
            -One piece voyage chronicles apk error fix and troubleshooting
            -Download one piece voyage chronicles apk for ios iphone ipad
            -One piece voyage chronicles apk size and requirements
            -One piece voyage chronicles apk features and benefits
            -Download one piece voyage chronicles apk from apkpure
            -One piece voyage chronicles apk ratings and reviews
            -One piece voyage chronicles apk alternatives and similar games
            -Download one piece voyage chronicles apk for mac os x
            -One piece voyage chronicles apk support and contact information
            -One piece voyage chronicles apk redeem codes and coupons
            -Download one piece voyage chronicles apk from google play store
            -One piece voyage chronicles apk wiki and fandom page
            -One piece voyage chronicles apk online multiplayer mode
            -Download one piece voyage chronicles apk for amazon fire tablet
            -One piece voyage chronicles apk developer and publisher information
            -One piece voyage chronicles apk screenshots and videos
            -Download one piece voyage chronicles apk for android tv box
            -One piece voyage chronicles apk forum and community discussion
            -One piece voyage chronicles apk events and rewards
            -Download one piece voyage chronicles apk for chromebook
            -One piece voyage chronicles apk news and updates
            -One piece voyage chronicles apk walkthrough and strategy guide
            -Download one piece voyage chronicles apk for nintendo switch
            -One piece voyage chronicles apk faq and help page
            -One piece voyage chronicles apk clans and groups
            -Download one piece voyage chronicles apk for blackberry phone
            -One piece voyage chronicles apk release date and history
            -One piece voyage chronicles apk system requirements test
            -Download one piece voyage chronicles apk for linux ubuntu
            -One piece voyage chronicles apk bugs and issues report
            -One piece voyage chronicles apk fan art and wallpapers
            -Download one piece voyage chronicles apk for windows phone 8.1

            - - - - - - - - - - - -
            Screenshot 1Screenshot 2Screenshot 3
            Screenshot 4Screenshot 5Screenshot 6
            -

            Tips and warnings for a safe and smooth installation

            -

            Here are some tips and warnings that you should keep in mind when downloading and installing One Piece Voyage Chronicles APK from Uptodown:

            -
              -
            • Make sure you download the latest version of the APK file, which is 1.0.0 as of June 2023. Older versions might not work properly or have some bugs or errors.
            • -
            • Make sure you download the APK file from Uptodown only, as other sources might not be safe or reliable. Uptodown verifies all its files with VirusTotal and provides detailed information about them.
            • -
            • Make sure you have enough space on your device to install the game, as it requires about 2 GB of storage after installation. You can delete some unwanted or unused apps or files to free up some space.
            • -
            • Make sure you have a stable internet connection when downloading and installing the game, as it might take some time depending on your speed. You can use Wi-Fi instead of mobile data to save some costs and avoid interruptions.
            • -
            • Make sure you disable the option to install apps from unknown sources after installing the game, as it might pose some security risks to your device. You can enable it again if you want to install other APK files in the future.
            • -
            • Make sure you do not modify or tamper with the APK file or the game files, as it might cause some problems or errors with the game. You can also get banned from playing online if you use any cheats or hacks.
            • -
            -

            How to enjoy One Piece Voyage Chronicles APK on your Android device

            -

            The main gameplay modes and features

            -

            One Piece Voyage Chronicles APK is a game that offers a lot of fun and excitement for fans of One Piece and action games. The game has several gameplay modes and features that you can enjoy on your Android device, such as:

            -
              -
            • Story Mode: This mode lets you follow the original story of One Piece from the beginning to the latest arc. You can relive the epic battles and adventures of Luffy and his crew as they face various enemies and challenges along their way to find One Piece. You can also unlock new characters and scenes as you progress through the story.
            • -
            • Arena Mode: This mode lets you compete with other players online in real-time battles. You can choose your own team of pirates and use different skills and strategies to defeat your opponents. You can also earn rewards and rankings based on your performance.
            • -
            • Trial Mode: This mode lets you test your skills and abilities in various challenges and missions. You can earn coins and gems by completing different tasks and objectives, such as defeating a certain number of enemies, surviving for a certain amount of time, or reaching a certain score.
            • -
            • Guild Mode: This mode lets you join a guild or create your own guild with other players. You can chat with your guild members, share resources, and participate in guild wars and events. You can also cooperate with your guild members to complete quests and missions together.
            • Adventure Mode: This mode lets you explore the vast world of One Piece and discover new islands and secrets. You can collect resources, items, and treasures by exploring different areas and interacting with various characters and objects. You can also encounter random events and enemies that will challenge you.
            • -
            -

            The best characters and skills to use

            -

            One Piece Voyage Chronicles APK has a huge roster of characters that you can choose from, each with their own skills and abilities. You can create your own team of pirates by selecting up to four characters and assigning them different roles, such as leader, attacker, defender, or supporter. You can also customize your characters by upgrading their stats, skills, and equipment.

            -

            Some of the best characters and skills to use in the game are:

            -
              -
            • Luffy: The main protagonist of the series and the leader of the Straw Hat Pirates. He has the power of the Gum-Gum Fruit, which makes his body stretchy and elastic. He can use various skills, such as Gum-Gum Pistol, Gum-Gum Gatling, Gum-Gum Jet Bazooka, and Gear Second. He is a versatile character that can deal high damage and move fast.
            • -
            • Zoro: The first mate of the Straw Hat Pirates and a master swordsman. He uses three swords in his fighting style, called Santoryu. He can use various skills, such as Oni Giri, Tatsumaki, Sanzen Sekai, and Asura. He is a powerful character that can deal massive damage and withstand attacks.
            • -
            • Nami: The navigator of the Straw Hat Pirates and a skilled thief. She uses a weapon called Clima-Tact, which allows her to manipulate the weather and create various effects. She can use various skills, such as Thunderbolt Tempo, Mirage Tempo, Thunder Lance Tempo, and Weather Egg. She is a smart character that can support her allies and debuff her enemies.
            • -
            • Sanji: The cook of the Straw Hat Pirates and a formidable fighter. He uses his legs as his main weapons, as he does not want to damage his hands for cooking. He can use various skills, such as Diable Jambe, Concasse, Collier Shoot, and Hell Memories. He is a fast character that can deal fire damage and evade attacks.
            • -
            • Chopper: The doctor of the Straw Hat Pirates and a reindeer-human hybrid. He has the power of the Human-Human Fruit, which allows him to transform into different forms with different abilities. He can use various skills, such as Brain Point, Heavy Point, Guard Point, and Monster Point. He is a cute character that can heal his allies and change his role depending on his form.
            • -
            -

            How to join a clan and play with friends online

            -

            One Piece Voyage Chronicles APK is a game that you can enjoy with your friends online. You can join a clan or create your own clan with other players who share your passion for One Piece. By joining a clan, you can chat with your clan members, share resources, and participate in clan wars and events. You can also cooperate with your clan members to complete quests and missions together.

            -

            To join a clan or create your own clan in One Piece Voyage Chronicles APK, you need to follow these steps:

            -
              -
            1. Go to the main menu of the game and tap on the Clan icon on the bottom right corner.
            2. -
            3. If you want to join an existing clan, you can search for one by name or ID, or browse through the recommended clans. You can also apply for any clan that has an open slot and wait for their approval.
            4. -
            5. If you want to create your own clan, you need to tap on the Create Clan button on the top right corner. You need to enter a name for your clan, choose an emblem for your clan, set some rules and requirements for your clan, and pay some coins as a fee.
            6. -
            7. Once you have joined or created a clan, you can access the Clan menu where you can see your clan information, chat with your clan members, donate resources to your clan bank, check your clan ranking and achievements, and join or start clan wars and events.
            8. -
            9. To play with your friends online in One Piece Voyage Chronicles APK, you need to go to the Arena mode or the Adventure mode and tap on the Invite Friends button on the top right corner. You can invite any of your friends who are online or send them a request to join you. You can also join any of your friends who are already playing by tapping on their name on the Friends list.
            10. -
            -

            Conclusion

            -

            A summary of the main points and a call to action

            -

            One Piece Voyage Chronicles APK is a game that every fan of One Piece should try out. It is a game that lets you experience the thrilling and adventurous world of One Piece on your Android device. You can battle with hundreds of characters from the series, explore various locations, and join a clan with your friends online. You can also enjoy the stunning graphics and animations that will make you feel like you are part of the One Piece universe.

            -

            If you want to download and install One Piece Voyage Chronicles APK, you can follow the steps we have provided in this article. You can also use the tips and tricks we have shared to make the most out of the game. However, you should always be careful and responsible when downloading and installing any APK file from any source. You should also respect the terms and conditions of the game and Google Play Store.

            -

            So, what are you waiting for? Download One Piece Voyage Chronicles APK now and join Luffy and his crew in their quest to find One Piece and become the king of all pirates!

            -

            FAQs

            -

            Is One Piece Voyage Chronicles APK free to play?

            -

            Yes, One Piece Voyage Chronicles APK is free to play. However, the game also offers some in-app purchases that can enhance your gameplay experience. You can buy coins, gems, and other items that can help you upgrade your characters, skills, and equipment. You can also buy some premium features, such as VIP membership, that can give you some exclusive benefits and rewards. However, these purchases are optional and not necessary to enjoy the game.

            -

            Is One Piece Voyage Chronicles APK compatible with my device?

            -

            One Piece Voyage Chronicles APK is compatible with most Android devices that run on Android 4.4 or higher. However, some devices might not be able to run the game smoothly or at all due to their specifications or performance. You can check the minimum requirements of the game on Uptodown before downloading it. You can also check the reviews and ratings of other users who have downloaded the game on their devices.

            -

            Is One Piece Voyage Chronicles APK safe and legal to download?

            -

            One Piece Voyage Chronicles APK is safe and legal to download as long as you download it from a trusted and reliable source, such as Uptodown. Uptodown verifies all its files with VirusTotal and provides detailed information about them. You should also scan the file with your own antivirus software before installing it on your device. However, you should also be aware of the risks and consequences of downloading and installing any unofficial version of any app or game. You should respect the terms and conditions of the game and Google Play Store.

            -

            How can I update One Piece Voyage Chronicles APK to the latest version?

            -

            One Piece Voyage Chronicles APK updates automatically whenever there is a new version available. You do not need to wait for it to be released on Google Play Store or download it manually from Uptodown. However, you can also check for updates manually by going to the Uptodown website and searching for One Piece Voyage Chronicles APK. You can then download and install the latest version of the file on your device.

            -

            Where can I find more information and support for One Piece Voyage Chronicles APK?

            -

            If you need more information or support for One Piece Voyage Chronicles APK, you can visit the official website of the game at https://www.onepiecevc.com/. You can also contact the customer service team by sending an email to service@onepiecevc.com. You can also join the official Facebook page of the game at https://www.facebook.com/onepiecevc. There, you can find news, updates, events, tips, guides, videos, screenshots, fan art, and more about the game.

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Roblox Latest Version and Discover Millions of Experiences.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Roblox Latest Version and Discover Millions of Experiences.md deleted file mode 100644 index 758038f974d57100097957f9905e379858103e86..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Roblox Latest Version and Discover Millions of Experiences.md +++ /dev/null @@ -1,121 +0,0 @@ - -

            Roblox New Update APK Download: Everything You Need to Know

            -

            Roblox is one of the most popular games in the world, with over 32.6 million daily active users and 8 million active creators. It's not just a game, but a platform where you can create, share, and play millions of experiences across a variety of genres and themes. Whether you want to go on an epic adventure, compete against rivals, or just hang out with friends, there's something for everyone on Roblox.

            -

            roblox new update apk download


            Download File 🌟 https://gohhs.com/2uPrii



            -

            If you're an Android user, you might be wondering how to download and install the latest Roblox update for your device. In this article, we'll show you how to do that, as well as what's new in the latest update, how to troubleshoot common issues, and how to get the most out of Roblox on your Android device.

            -

            What is Roblox and why is it so popular?

            -

            Roblox is a platform where you can create, share, and play millions of games. You can use Roblox Studio, a free tool that lets you design your own experiences using simple drag-and-drop tools or more advanced coding languages. You can also explore the infinite metaverse of Roblox, where you can join any of the games created by other users from around the world.

            -

            Roblox has many features and benefits for players and creators. Here are some of them:

            -

            roblox latest version apk download
            -roblox new update android download
            -roblox apk download for free
            -roblox new update 2023 apk download
            -roblox apk download latest update
            -roblox new update apk mod download
            -roblox apk download for android
            -roblox new update apk free download
            -roblox apk download new version
            -roblox new update apk download 2023
            -roblox apk download for pc
            -roblox new update apk file download
            -roblox apk download updated version
            -roblox new update apk hack download
            -roblox apk download for tablet
            -roblox new update apk obb download
            -roblox apk download old version
            -roblox new update apk unlimited robux download
            -roblox apk download offline installer
            -roblox new update apk data download
            -roblox apk download online play
            -roblox new update apk full version download
            -roblox apk download original
            -roblox new update apk no verification download
            -roblox apk download play store
            -roblox new update apk cracked download
            -roblox apk download pure
            -roblox new update apk mirror download
            -roblox apk download pro
            -roblox new update apk revdl download
            -roblox apk download uptodown
            -roblox new update apk rexdl download
            -roblox apk download unblocked
            -roblox new update apk vip download
            -roblox apk download windows 10
            -roblox new update apk xda download
            -roblox apk download xbox one
            -roblox new update apk zip download
            -roblox apk download youtube video downloader

            -
              -
            • You can be anything you can imagine. You can customize your avatar with tons of hats, shirts, faces, gear, and more. You can also create your own items and sell them on the marketplace.
            • -
            • You can play together anytime, anywhere. Roblox features full cross-platform support, meaning you can join your friends and millions of other players on their computers, mobile devices, Xbox One, or VR headsets.
            • -
            • You can learn valuable skills. Roblox teaches you coding, game design, entrepreneurship, and more. You can also access free educational resources and tutorials to help you improve your skills.
            • -
            • You can have fun and express yourself. Roblox is a place where you can unleash your creativity and imagination. You can also chat with friends, join groups, participate in events, earn badges, and more.
            • -
            -

            How to download and install the latest Roblox update for Android devices

            -

            To download and install the latest Roblox update for your Android device, follow these steps:

            -
              -
            1. Visit the official Roblox website or Google Play Store. You can either go to https://www.roblox.com/?Downloads:Install_distribution:Windows or search for "Roblox" on the Google Play Store app on your device.
            2. -
            3. Download the Roblox APK file and allow installation from unknown sources. If you download from the website, you'll need to enable installation from unknown sources on your device. To do that, go to Settings > Security > Unknown Sources and toggle it on. If you download from the Google Play Store, you don't need to do this step.
            4. -
            5. Launch the app and log in with your account or create a new one. Once the installation is complete, you can open the app and enter your username and password or sign up for a free account if you don't have one already.
            6. -
            -

            Congratulations, you have successfully downloaded and installed the latest Roblox update for your Android device. Now you can enjoy the endless possibilities of Roblox on your mobile device.

            -

            What's new in the latest Roblox update?

            -

            The latest Roblox update for Android devices was released on June 21, 2023. It has a version number of 2.495.268 and a file size of 101 MB. Here are some of the new features and improvements in this update:

            -
              -
            • Bug fixes and improvements for speed and reliability. The developers have fixed some of the issues that caused the app to crash, freeze, or lag. They have also optimized the app for better performance and stability.
            • -
            • New experiences and items to explore and enjoy. The update brings new games and content to Roblox, such as the Summer Camp event, where you can join a team and compete in various challenges. You can also find new items and accessories in the catalog, such as sunglasses, hats, backpacks, and more.
            • -
            • Enhanced cross-platform support and compatibility. The update makes it easier for you to play with your friends and other players across different devices and platforms. You can also access more games and features that were previously unavailable or limited on Android devices.
            • -
            -

            The latest Roblox update for Android devices is a must-have for any Roblox fan. It offers a smoother, faster, and more fun experience for both players and creators.

            -

            How to troubleshoot common issues with Roblox on Android devices

            -

            Roblox is a great app, but sometimes it can have some problems or errors that prevent you from enjoying it fully. Here are some of the common issues with Roblox on Android devices and how to fix them:

            -
              -
            • Restart the app or your device if it crashes or freezes. Sometimes, the app can stop working or respond slowly due to various reasons, such as low memory, high CPU usage, or network issues. To fix this, you can try closing the app and reopening it, or turning off your device and turning it back on.
            • -
            • Check your internet connection and firewall settings if it fails to load or connect. Roblox requires a stable and fast internet connection to run properly. If you have a weak or unstable signal, you might experience loading or connection errors. To fix this, you can try moving closer to your router, switching to a different network, or using a VPN. You should also check your firewall settings and make sure they are not blocking Roblox from accessing the internet.
            • -
            • Clear your cache and cookies if it runs slowly or has glitches. Cache and cookies are temporary files that store information about your app usage and preferences. However, they can also accumulate over time and cause the app to run slowly or have glitches. To fix this, you can clear your cache and cookies by going to Settings > Apps > Roblox > Storage > Clear Cache/Clear Data.
            • -
            -

            If none of these solutions work for you, you can also contact Roblox support by going to https://en.help.roblox.com/hc/en-us or sending an email to info@roblox.com. They will help you resolve any issues or answer any questions you might have about Roblox.

            -

            How to get the most out of Roblox on Android devices

            -

            Roblox is a platform where you can create, share, and play millions of games. But how can you get the most out of it on your Android device? Here are some tips and tricks that will help you enjoy Roblox even more:

            -
              -
            • Customize your avatar with tons of accessories and outfits. You can make your avatar look unique and cool by changing its appearance and clothing. You can find many items in the catalog that you can buy with Robux, the virtual currency of Roblox. You can also earn Robux by creating games or selling items.
            • -
            • Chat and play with your friends and millions of other players. You can join any game on Roblox and meet new people from different countries and cultures. You can also chat with them using text or voice messages, or join groups that share your interests.
            • -
            • Learn coding and game design skills with Roblox Studio. You can use Roblox Studio, a free tool that lets you create your own games and experiences on Roblox. You can use simple drag-and-drop tools or more advanced coding languages to design your games. You can also access free educational resources and tutorials to help you learn coding and game design skills.
            • -
            -

            Roblox is a platform where you can have fun, express yourself, and learn new skills. By following these tips and tricks, you can get the most out of Roblox on your Android device.

            -

            Conclusion

            -

            In this article, we have shown you how to download and install the latest Roblox update for your Android device, as well as what's new in the update, how to troubleshoot common issues, and how to get the most out of Roblox on your device. We hope you found this article helpful and informative.

            -

            Roblox is a platform where you can create, share, and play millions of games. It's not just a game, but a metaverse where you can be anything you can imagine. Whether you want to go on an epic adventure, compete against rivals, or just hang out with friends, there's something for everyone on Roblox.

            -

            If you're an Android user, don't miss out on the latest Roblox update. Download it now and enjoy the endless possibilities of Roblox on your mobile device.

            -

            FAQs

            -

            Here are some of the frequently asked questions about Roblox and the latest update for Android devices:

            -
              -
            1. Is Roblox free to download and play?
            2. -

              Yes, Roblox is free to download and play. However, some games and items may require Robux, the virtual currency of Roblox, to access or purchase. You can buy Robux with real money or earn them by creating games or selling items.

              -
            3. Is Roblox safe for kids?
            4. -

              Roblox is committed to providing a safe and fun environment for everyone. Roblox has various safety features and settings that parents and guardians can use to monitor and control their children's activity on Roblox. These include account age verification, chat filters, reporting and blocking systems, parental controls, and more. You can learn more about Roblox's safety measures by visiting https://en.help.roblox.com/hc/en-us/articles/203312120-Safety-Features-Chat-Privacy-Filtering.

              -
            5. How do I update Roblox on my Android device?
            6. -

              You can update Roblox on your Android device by following these steps:

              -
                -
              1. Open the Google Play Store app on your device.
              2. -
              3. Tap on the menu icon (three horizontal lines) on the top left corner of the screen.
              4. -
              5. Tap on My apps & games.
              6. -
              7. Find Roblox on the list of apps that need updates.
              8. -
              9. Tap on Update.
              10. -
              -

              You can also enable auto-update for Roblox by tapping on the three dots icon on the top right corner of the app page and toggling on Auto-update.

              -
            7. What are some of the best games to play on Roblox?
            8. -

              There are millions of games to play on Roblox, covering a variety of genres and themes. Some of the most popular and recommended games are:

              -
                -
              • Adopt Me!: A game where you can adopt pets, design your home, explore a magical world, and more.
              • -
              • Brookhaven RP: A game where you can roleplay as different characters, build your own house, drive vehicles, and more.
              • -
              • Murder Mystery 2: A game where you can be a murderer, a sheriff, or an innocent in a thrilling mystery.
              • -
              • Tower of Hell: A game where you have to climb a tower of challenging obstacles without falling.
              • -
              • Piggy: A game where you have to escape from a piggy that is trying to kill you.
              • -
              -
            9. How do I contact Roblox support?
            10. -

              If you have any questions or issues with Roblox, you can contact Roblox support by going to https://en.help.roblox.com/hc/en-us or sending an email to info@roblox.com. They will help you resolve any problems or answer any questions you might have about Roblox.

              -
            -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Undawn APK and Experience a Post-Apocalyptic Open World RPG.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Undawn APK and Experience a Post-Apocalyptic Open World RPG.md deleted file mode 100644 index 8a65ac61c94118af4ea848594c7ca33bbf0257be..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Undawn APK and Experience a Post-Apocalyptic Open World RPG.md +++ /dev/null @@ -1,121 +0,0 @@ -
            -

            What is Undawn APK and why you should download it

            -

            If you are looking for a new and exciting game to play on your Android device or PC, you might want to check out Undawn APK. Undawn APK is a free-to-play open-world survival RPG developed by Tencent and Lightspeed & Quantum Studios. It is set in a post-apocalyptic world where zombies and other dangerous creatures roam freely. You have to explore, adapt, and survive in this hostile environment with other players. You can also team up with your friends or other survivors to complete missions, fight enemies, and build your shelter.

            -

            undown apk


            Download ––– https://gohhs.com/2uPsmo



            -

            In this article, we will tell you everything you need to know about Undawn APK, including how to download and install it on your Android device or PC, how to play it, what are its features, and what are its pros and cons. We will also answer some frequently asked questions about Undawn APK at the end. So, without further ado, let's get started!

            -

            How to download and install Undawn APK on your Android device

            -

            Downloading and installing Undawn APK on your Android device is very easy. Just follow these simple steps:

            -
              -
            1. Go to Uptodown.com from your browser and search for "Undawn".
            2. -
            3. Select the latest version of Undawn APK from the list of results and tap on "Download".
            4. -
            5. Wait for the download to finish. You will need about 353 MB of free space on your device.
            6. -
            7. Once the download is complete, open the file manager app on your device and locate the downloaded file. It should be in the "Downloads" folder.
            8. -
            9. Tap on the file and allow the installation from unknown sources if prompted.
            10. -
            11. Follow the on-screen instructions to install Undawn APK on your device.
            12. -
            13. After the installation is done, you will need to download an additional 4.5 GB of data before starting your first game. Make sure you have a stable internet connection for this.
            14. -
            15. Enjoy playing Undawn APK on your Android device!
            16. -
            -

            How to play Undawn APK on your PC with an emulator

            -

            If you prefer playing games on a bigger screen or with a keyboard and mouse, you can also play Undawn APK on your PC with an emulator. An emulator is a software that allows you to run Android apps on your PC. There are many emulators available online, but we recommend using BlueStacks, which is one of the most popular and reliable ones. Here's how to play Undawn APK on your PC with BlueStacks:

            -

            undown apk download
            -undown apk uptodown
            -undown apk mod
            -undown apk obb
            -undown apk latest version
            -undown apk for android
            -undown apk for pc
            -undown apk free download
            -undown apk offline
            -undown apk data
            -undown apk full version
            -undown apk hack
            -undown apk unlimited money
            -undown apk revdl
            -undown apk rexdl
            -undown apk pure
            -undown apk mirror
            -undown apk update
            -undown apk beta
            -undown apk global version
            -undown apk english version
            -undown apk chinese version
            -undown apk gameplay
            -undown apk review
            -undown apk size
            -undown apk requirements
            -undown apk features
            -undown apk tips and tricks
            -undown apk cheats
            -undown apk guide
            -undown apk walkthrough
            -undown apk tutorial
            -undown apk best settings
            -undown apk best weapons
            -undown apk best characters
            -undown apk best skills
            -undown apk best builds
            -undown apk best team
            -undown apk best strategy
            -undown apk best graphics
            -undown apk vs pubg mobile lite
            -undown apk vs cod mobile
            -undown apk vs free fire
            -undown apk vs fortnite
            -undown apk vs genshin impact
            -undown apkp vs cyber hunter
            -how to install undawn apkp
            -how to play unawn apkp

            -
              -
            1. Download BlueStacks from its official website and install it on your PC.
            2. -
            3. Launch BlueStacks and sign in with your Google account.
            4. -
            5. Go to Uptodown.com from BlueStacks' browser and search for "Undawn".
            6. -
            7. Select the latest version of Undawn APK from the list of results and tap on "Download".
            8. -
            9. Wait for the download to finish. You will need about 353 MB of free space on your PC.
            10. -
            11. Once the download is complete, open BlueStacks' file manager app and locate the downloaded file. It should be in the "Downloads" folder.
            12. -
            13. Tap on the file and allow the installation from unknown sources if prompted.
            14. -
            15. Follow the on-screen instructions to install Undawn APK on your PC.
            16. -
            17. After the installation is done, you will need to download an additional 4.5 GB of data before starting your first game. Make sure you have a stable internet connection for this.
            18. -
            19. Enjoy playing Undawn APK on your PC with BlueStacks!
            20. -
            -

            The gameplay and features of Undawn APK

            -

            Now that you know how to download and install Undawn APK on your device or PC, let's take a look at what makes this game so fun and addictive. Undawn APK is a game that combines the elements of open-world survival RPG and zombie shooter. You can explore a vast and dynamic world full of dangers and opportunities, interact with other players and NPCs, craft and upgrade your weapons and items, and fight against hordes of zombies and other enemies. Here are some of the gameplay and features of Undawn APK that you should know:

            -

            The open-world survival RPG genre and the post-apocalyptic setting

            -

            Undawn APK is a game that belongs to the open-world survival RPG genre, which means that you have the freedom to roam around a large and diverse map, collect resources, build structures, and complete quests. You can also customize your character's appearance, skills, and equipment according to your preferences. The game is set in a post-apocalyptic world where a mysterious virus has turned most of the population into zombies. You have to survive in this harsh environment by scavenging for food, water, medicine, and other supplies, as well as avoiding or fighting the undead and other hostile forces.

            -

            The main characters and factions in the game

            -

            Undawn APK has a rich and immersive story that involves different characters and factions that you can interact with. You can choose to play as one of the four main characters: Jake, Rose, Leo, or Zoe. Each character has their own backstory, personality, and abilities that affect their gameplay. You can also join one of the three factions in the game: The Rangers, The Outcasts, or The Revenants. Each faction has its own ideology, goals, allies, and enemies that influence your choices and consequences. You can also switch factions at any time if you want to experience different perspectives.

            -

            The combat system and the weapons and items available

            -

            Undawn APK has a fast-paced and thrilling combat system that requires you to use your skills, strategy, and reflexes to survive. You can use various weapons and items to fight against zombies and other enemies, such as guns, melee weapons, grenades, traps, drones, vehicles, and more. You can also craft and upgrade your weapons and items using the materials you find or trade in the game. You can also use stealth, cover, and environmental elements to gain an advantage in combat. You have to be careful though, as the zombies are not the only threat you face. There are also human enemies such as bandits, raiders, mercenaries, and rival factions that will try to kill you or loot your resources.

            -

            The cooperative mode and the cross-platform support

            -

            Undawn APK is not only a solo game but also a cooperative game that allows you to team up with your friends or other players online. You can join or create a squad of up to four players and work together to complete missions, raid enemy bases, defend your shelter, or just explore the world. You can also chat with your teammates using voice or text messages. Undawn APK also supports cross-platform play between Android devices and PCs, which means that you can play with anyone regardless of their device type.

            -

            The graphics and sound quality of the game

            -

            Undawn APK is a game that boasts impressive graphics and sound quality that enhance your gaming experience. The game uses Unreal Engine 4, which is one of the most advanced game engines in the industry. The game features realistic lighting effects, shadows, textures, animations, physics, and weather systems that create a stunning visual presentation. The game also has high-quality sound effects, music, and voice acting that create a immersive audio experience. The game also runs smoothly on most devices without lagging or crashing.

            -

            The pros and cons of Undawn APK

            -

            Undawn APK is a game that has many pros and cons that you should consider before playing it. Here are some of them:

            -

            The pros of Undawn APK

            -
              -
            • It is a free-to-play game that does not require any purchase or subscription to enjoy.
            • -
            • It is a game that offers a lot of variety and content in terms of gameplay, story, characters, factions, weapons, items, and missions.
            • -
            • It is a game that has a high-quality graphics and sound that create a realistic and immersive atmosphere.
            • -
            • It is a game that supports cooperative and cross-platform play that allows you to have fun with your friends or other players online.
            • -
            -

            The cons of Undawn APK

            -
              -
            • It is a game that requires a lot of storage space and data to download and play. You will need about 4.8 GB of free space on your device or PC, and a stable internet connection to play online.
            • -
            • It is a game that may have some bugs and glitches that affect the performance and gameplay. The game is still in development and may not be fully optimized or stable.
            • -
            • It is a game that may have some violence and gore that may not be suitable for younger or sensitive players. The game involves killing zombies and other enemies with various weapons and seeing blood and body parts.
            • -
            -

            Conclusion and FAQs

            -

            Undawn APK is a game that offers a unique and exciting experience of surviving in a post-apocalyptic world with zombies and other dangers. You can explore, adapt, and fight in this open-world survival RPG with your friends or other players online. You can also customize your character, join factions, craft weapons, and complete missions. The game has amazing graphics and sound quality that make you feel like you are in the game. The game is free-to-play and supports cross-platform play between Android devices and PCs. However, the game also has some drawbacks, such as the large file size, the online requirement, and the possible bugs and glitches. The game may also not be suitable for younger or sensitive players due to the violence and gore. Overall, Undawn APK is a game that we recommend you to try out if you are looking for a new and thrilling game to play on your device or PC.

            -

            Here are some FAQs about Undawn APK that you may have:

            -

            Q: Is Undawn APK safe to download and install?

            -

            A: Yes, Undawn APK is safe to download and install from Uptodown.com, which is a trusted website that provides verified and secure APK files. However, you should always be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device or PC.

            -

            Q: Is Undawn APK available for iOS devices?

            -

            A: No, Undawn APK is not available for iOS devices at the moment. The game is only compatible with Android devices (version 5.0 or higher) and PCs (Windows 7 or higher). However, the developers may release an iOS version in the future, so stay tuned for updates.

            -

            Q: How can I update Undawn APK to the latest version?

            -

            A: You can update Undawn APK to the latest version by following the same steps as downloading and installing it. Just go to Uptodown.com from your browser and search for "Undawn". Select the latest version of Undawn APK from the list of results and tap on "Download". Wait for the download to finish and open the file manager app on your device or PC. Tap on the file and follow the on-screen instructions to install the update. You may need to download additional data before playing the updated version.

            -

            Q: How can I contact the developers of Undawn APK?

            -

            A: You can contact the developers of Undawn APK by visiting their official website or their Facebook page. You can also send them an email at undawngame@gmail.com. You can also report any bugs or issues you encounter in the game by using the feedback option in the game settings.

            -

            Q: How can I support the developers of Undawn APK?

            -

            A: You can support the developers of Undawn APK by playing their game, rating it, reviewing it, sharing it with your friends, and giving them feedback. You can also purchase some in-game items or currency with real money if you want to enhance your gameplay or support their work.

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/misc.py b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/misc.py deleted file mode 100644 index d64b84ef24bea0c98e76824feb1903f6bfebe7a5..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/misc.py +++ /dev/null @@ -1,717 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. -""" -import colorsys -import datetime -import functools -import io -import json -import os -import pickle -import subprocess -import time -from collections import OrderedDict, defaultdict, deque -from typing import List, Optional - -import numpy as np -import torch -import torch.distributed as dist - -# needed due to empty tensor bug in pytorch and torchvision 0.5 -import torchvision -from torch import Tensor - -__torchvision_need_compat_flag = float(torchvision.__version__.split(".")[1]) < 7 -if __torchvision_need_compat_flag: - from torchvision.ops import _new_empty_tensor - from torchvision.ops.misc import _output_size - - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! - """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda") - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - if d.shape[0] == 0: - return 0 - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - if os.environ.get("SHILONG_AMP", None) == "1": - eps = 1e-4 - else: - eps = 1e-6 - return self.total / (self.count + eps) - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value, - ) - - -@functools.lru_cache() -def _get_global_gloo_group(): - """ - Return a process group based on gloo backend, containing all the ranks - The result is cached. - """ - - if dist.get_backend() == "nccl": - return dist.new_group(backend="gloo") - - return dist.group.WORLD - - -def all_gather_cpu(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - - world_size = get_world_size() - if world_size == 1: - return [data] - - cpu_group = _get_global_gloo_group() - - buffer = io.BytesIO() - torch.save(data, buffer) - data_view = buffer.getbuffer() - device = "cuda" if cpu_group is None else "cpu" - tensor = torch.ByteTensor(data_view).to(device) - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device=device, dtype=torch.long) - size_list = [torch.tensor([0], device=device, dtype=torch.long) for _ in range(world_size)] - if cpu_group is None: - dist.all_gather(size_list, local_size) - else: - print("gathering on cpu") - dist.all_gather(size_list, local_size, group=cpu_group) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - assert isinstance(local_size.item(), int) - local_size = int(local_size.item()) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=device)) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=device) - tensor = torch.cat((tensor, padding), dim=0) - if cpu_group is None: - dist.all_gather(tensor_list, tensor) - else: - dist.all_gather(tensor_list, tensor, group=cpu_group) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - tensor = torch.split(tensor, [size, max_size - size], dim=0)[0] - buffer = io.BytesIO(tensor.cpu().numpy()) - obj = torch.load(buffer) - data_list.append(obj) - - return data_list - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - - if os.getenv("CPU_REDUCE") == "1": - return all_gather_cpu(data) - - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device="cuda") - size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda")) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that all processes - have the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.all_reduce(values) - if average: - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, attr)) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - # print(name, str(meter)) - # import ipdb;ipdb.set_trace() - if meter.count > 0: - loss_str.append("{}: {}".format(name, str(meter))) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None, logger=None): - if logger is None: - print_func = print - else: - print_func = logger.info - - i = 0 - if not header: - header = "" - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt="{avg:.4f}") - data_time = SmoothedValue(fmt="{avg:.4f}") - space_fmt = ":" + str(len(str(len(iterable)))) + "d" - if torch.cuda.is_available(): - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - "max mem: {memory:.0f}", - ] - ) - else: - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - ] - ) - MB = 1024.0 * 1024.0 - for obj in iterable: - data_time.update(time.time() - end) - yield obj - # import ipdb; ipdb.set_trace() - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print_func( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - memory=torch.cuda.max_memory_allocated() / MB, - ) - ) - else: - print_func( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - ) - ) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print_func( - "{} Total time: {} ({:.4f} s / it)".format( - header, total_time_str, total_time / len(iterable) - ) - ) - - -def get_sha(): - cwd = os.path.dirname(os.path.abspath(__file__)) - - def _run(command): - return subprocess.check_output(command, cwd=cwd).decode("ascii").strip() - - sha = "N/A" - diff = "clean" - branch = "N/A" - try: - sha = _run(["git", "rev-parse", "HEAD"]) - subprocess.check_output(["git", "diff"], cwd=cwd) - diff = _run(["git", "diff-index", "HEAD"]) - diff = "has uncommited changes" if diff else "clean" - branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"]) - except Exception: - pass - message = f"sha: {sha}, status: {diff}, branch: {branch}" - return message - - -def collate_fn(batch): - # import ipdb; ipdb.set_trace() - batch = list(zip(*batch)) - batch[0] = nested_tensor_from_tensor_list(batch[0]) - return tuple(batch) - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - if mask == "auto": - self.mask = torch.zeros_like(tensors).to(tensors.device) - if self.mask.dim() == 3: - self.mask = self.mask.sum(0).to(bool) - elif self.mask.dim() == 4: - self.mask = self.mask.sum(1).to(bool) - else: - raise ValueError( - "tensors dim must be 3 or 4 but {}({})".format( - self.tensors.dim(), self.tensors.shape - ) - ) - - def imgsize(self): - res = [] - for i in range(self.tensors.shape[0]): - mask = self.mask[i] - maxH = (~mask).sum(0).max() - maxW = (~mask).sum(1).max() - res.append(torch.Tensor([maxH, maxW])) - return res - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def to_img_list_single(self, tensor, mask): - assert tensor.dim() == 3, "dim of tensor should be 3 but {}".format(tensor.dim()) - maxH = (~mask).sum(0).max() - maxW = (~mask).sum(1).max() - img = tensor[:, :maxH, :maxW] - return img - - def to_img_list(self): - """remove the padding and convert to img list - - Returns: - [type]: [description] - """ - if self.tensors.dim() == 3: - return self.to_img_list_single(self.tensors, self.mask) - else: - res = [] - for i in range(self.tensors.shape[0]): - tensor_i = self.tensors[i] - mask_i = self.mask[i] - res.append(self.to_img_list_single(tensor_i, mask_i)) - return res - - @property - def device(self): - return self.tensors.device - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - @property - def shape(self): - return {"tensors.shape": self.tensors.shape, "mask.shape": self.mask.shape} - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - if torchvision._is_tracing(): - # nested_tensor_from_tensor_list() does not export well to ONNX - # call _onnx_nested_tensor_from_tensor_list() instead - return _onnx_nested_tensor_from_tensor_list(tensor_list) - - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") - return NestedTensor(tensor, mask) - - -# _onnx_nested_tensor_from_tensor_list() is an implementation of -# nested_tensor_from_tensor_list() that is supported by ONNX tracing. -@torch.jit.unused -def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor: - max_size = [] - for i in range(tensor_list[0].dim()): - max_size_i = torch.max( - torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32) - ).to(torch.int64) - max_size.append(max_size_i) - max_size = tuple(max_size) - - # work around for - # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - # m[: img.shape[1], :img.shape[2]] = False - # which is not yet supported in onnx - padded_imgs = [] - padded_masks = [] - for img in tensor_list: - padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))] - padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0])) - padded_imgs.append(padded_img) - - m = torch.zeros_like(img[0], dtype=torch.int, device=img.device) - padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1) - padded_masks.append(padded_mask.to(torch.bool)) - - tensor = torch.stack(padded_imgs) - mask = torch.stack(padded_masks) - - return NestedTensor(tensor, mask=mask) - - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop("force", False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - - -def init_distributed_mode(args): - if "WORLD_SIZE" in os.environ and os.environ["WORLD_SIZE"] != "": # 'RANK' in os.environ and - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ["WORLD_SIZE"]) - args.gpu = args.local_rank = int(os.environ["LOCAL_RANK"]) - - # launch by torch.distributed.launch - # Single node - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 1 --rank 0 ... - # Multi nodes - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 0 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ... - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 1 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ... - # args.rank = int(os.environ.get('OMPI_COMM_WORLD_RANK')) - # local_world_size = int(os.environ['GPU_PER_NODE_COUNT']) - # args.world_size = args.world_size * local_world_size - # args.gpu = args.local_rank = int(os.environ['LOCAL_RANK']) - # args.rank = args.rank * local_world_size + args.local_rank - print( - "world size: {}, rank: {}, local rank: {}".format( - args.world_size, args.rank, args.local_rank - ) - ) - print(json.dumps(dict(os.environ), indent=2)) - elif "SLURM_PROCID" in os.environ: - args.rank = int(os.environ["SLURM_PROCID"]) - args.gpu = args.local_rank = int(os.environ["SLURM_LOCALID"]) - args.world_size = int(os.environ["SLURM_NPROCS"]) - - print( - "world size: {}, world rank: {}, local rank: {}, device_count: {}".format( - args.world_size, args.rank, args.local_rank, torch.cuda.device_count() - ) - ) - else: - print("Not using distributed mode") - args.distributed = False - args.world_size = 1 - args.rank = 0 - args.local_rank = 0 - return - - print("world_size:{} rank:{} local_rank:{}".format(args.world_size, args.rank, args.local_rank)) - args.distributed = True - torch.cuda.set_device(args.local_rank) - args.dist_backend = "nccl" - print("| distributed init (rank {}): {}".format(args.rank, args.dist_url), flush=True) - - torch.distributed.init_process_group( - backend=args.dist_backend, - world_size=args.world_size, - rank=args.rank, - init_method=args.dist_url, - ) - - print("Before torch.distributed.barrier()") - torch.distributed.barrier() - print("End torch.distributed.barrier()") - setup_for_distributed(args.rank == 0) - - -@torch.no_grad() -def accuracy(output, target, topk=(1,)): - """Computes the precision@k for the specified values of k""" - if target.numel() == 0: - return [torch.zeros([], device=output.device)] - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].view(-1).float().sum(0) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -@torch.no_grad() -def accuracy_onehot(pred, gt): - """_summary_ - - Args: - pred (_type_): n, c - gt (_type_): n, c - """ - tp = ((pred - gt).abs().sum(-1) < 1e-4).float().sum() - acc = tp / gt.shape[0] * 100 - return acc - - -def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None): - # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor - """ - Equivalent to nn.functional.interpolate, but with support for empty batch sizes. - This will eventually be supported natively by PyTorch, and this - class can go away. - """ - if __torchvision_need_compat_flag < 0.7: - if input.numel() > 0: - return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners) - - output_shape = _output_size(2, input, size, scale_factor) - output_shape = list(input.shape[:-2]) + list(output_shape) - return _new_empty_tensor(input, output_shape) - else: - return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners) - - -class color_sys: - def __init__(self, num_colors) -> None: - self.num_colors = num_colors - colors = [] - for i in np.arange(0.0, 360.0, 360.0 / num_colors): - hue = i / 360.0 - lightness = (50 + np.random.rand() * 10) / 100.0 - saturation = (90 + np.random.rand() * 10) / 100.0 - colors.append( - tuple([int(j * 255) for j in colorsys.hls_to_rgb(hue, lightness, saturation)]) - ) - self.colors = colors - - def __call__(self, idx): - return self.colors[idx] - - -def inverse_sigmoid(x, eps=1e-3): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -def clean_state_dict(state_dict): - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k[:7] == "module.": - k = k[7:] # remove `module.` - new_state_dict[k] = v - return new_state_dict diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/module.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/module.d.ts deleted file mode 100644 index 5a60a5fa29e75ed431d23a4c69c8b6e865bf8ded..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/module.d.ts +++ /dev/null @@ -1,115 +0,0 @@ -/** - * @since v0.3.7 - */ -declare module 'module' { - import { URL } from 'node:url'; - namespace Module { - /** - * The `module.syncBuiltinESMExports()` method updates all the live bindings for - * builtin `ES Modules` to match the properties of the `CommonJS` exports. It - * does not add or remove exported names from the `ES Modules`. - * - * ```js - * const fs = require('fs'); - * const assert = require('assert'); - * const { syncBuiltinESMExports } = require('module'); - * - * fs.readFile = newAPI; - * - * delete fs.readFileSync; - * - * function newAPI() { - * // ... - * } - * - * fs.newAPI = newAPI; - * - * syncBuiltinESMExports(); - * - * import('fs').then((esmFS) => { - * // It syncs the existing readFile property with the new value - * assert.strictEqual(esmFS.readFile, newAPI); - * // readFileSync has been deleted from the required fs - * assert.strictEqual('readFileSync' in fs, false); - * // syncBuiltinESMExports() does not remove readFileSync from esmFS - * assert.strictEqual('readFileSync' in esmFS, true); - * // syncBuiltinESMExports() does not add names - * assert.strictEqual(esmFS.newAPI, undefined); - * }); - * ``` - * @since v12.12.0 - */ - function syncBuiltinESMExports(): void; - /** - * `path` is the resolved path for the file for which a corresponding source map - * should be fetched. - * @since v13.7.0, v12.17.0 - */ - function findSourceMap(path: string, error?: Error): SourceMap; - interface SourceMapPayload { - file: string; - version: number; - sources: string[]; - sourcesContent: string[]; - names: string[]; - mappings: string; - sourceRoot: string; - } - interface SourceMapping { - generatedLine: number; - generatedColumn: number; - originalSource: string; - originalLine: number; - originalColumn: number; - } - /** - * @since v13.7.0, v12.17.0 - */ - class SourceMap { - /** - * Getter for the payload used to construct the `SourceMap` instance. - */ - readonly payload: SourceMapPayload; - constructor(payload: SourceMapPayload); - /** - * Given a line number and column number in the generated source file, returns - * an object representing the position in the original file. The object returned - * consists of the following keys: - */ - findEntry(line: number, column: number): SourceMapping; - } - } - interface Module extends NodeModule {} - class Module { - static runMain(): void; - static wrap(code: string): string; - static createRequire(path: string | URL): NodeRequire; - static builtinModules: string[]; - static isBuiltin(moduleName: string): boolean; - static Module: typeof Module; - constructor(id: string, parent?: Module); - } - global { - interface ImportMeta { - url: string; - /** - * @experimental - * This feature is only available with the `--experimental-import-meta-resolve` - * command flag enabled. - * - * Provides a module-relative resolution function scoped to each module, returning - * the URL string. - * - * @param specified The module specifier to resolve relative to `parent`. - * @param parent The absolute parent module URL to resolve from. If none - * is specified, the value of `import.meta.url` is used as the default. - */ - resolve?(specified: string, parent?: string | URL): Promise; - } - } - export = Module; -} -declare module 'node:module' { - import module = require('module'); - export = module; -} diff --git a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpgd.h b/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpgd.h deleted file mode 100644 index a1c0cac61839a6f66a42c341f50d5e36faad9a93..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpgd.h +++ /dev/null @@ -1,316 +0,0 @@ -// jpgd.h - C++ class for JPEG decompression. -// Public domain, Rich Geldreich -#ifndef JPEG_DECODER_H -#define JPEG_DECODER_H - -#include -#include -#include - -namespace jpgd -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef unsigned short uint16; - typedef unsigned int uint; - typedef signed int int32; - - // Loads a JPEG image from a memory buffer or a file. - // req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA). - // On return, width/height will be set to the image's dimensions, and actual_comps will be set to the either 1 (grayscale) or 3 (RGB). - // Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly. - // Requesting a 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp. -// BEGIN EPIC MOD -//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps); - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format); -// END EPIC MOD - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps); - - // Success/failure error codes. - enum jpgd_status - { - JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1, - JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE, - JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS, - JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH, - JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER, - JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS, - JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE, - JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR, - JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM - }; - - // Input stream interface. - // Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available. - // The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set. - // It the input stream contains data after the JPEG stream's EOI (end of image) marker it will probably be pulled into the internal buffer. - // Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding. - class jpeg_decoder_stream - { - public: - jpeg_decoder_stream() { } - virtual ~jpeg_decoder_stream() { } - - // The read() method is called when the internal input buffer is empty. - // Parameters: - // pBuf - input buffer - // max_bytes_to_read - maximum bytes that can be written to pBuf - // pEOF_flag - set this to true if at end of stream (no more bytes remaining) - // Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0). - // Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full. - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0; - }; - - // stdio FILE stream class. - class jpeg_decoder_file_stream : public jpeg_decoder_stream - { - jpeg_decoder_file_stream(const jpeg_decoder_file_stream &); - jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &); - - FILE *m_pFile; - bool m_eof_flag, m_error_flag; - - public: - jpeg_decoder_file_stream(); - virtual ~jpeg_decoder_file_stream(); - - bool open(const char *Pfilename); - void close(); - - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag); - }; - - // Memory stream class. - class jpeg_decoder_mem_stream : public jpeg_decoder_stream - { - const uint8 *m_pSrc_data; - uint m_ofs, m_size; - - public: - jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { } - jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { } - - virtual ~jpeg_decoder_mem_stream() { } - - bool open(const uint8 *pSrc_data, uint size); - void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; } - - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag); - }; - - // Loads JPEG file from a jpeg_decoder_stream. - unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps); - - enum - { - JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4, - JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384 - }; - - typedef int16 jpgd_quant_t; - typedef int16 jpgd_block_t; - - class jpeg_decoder - { - public: - // Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc. - // methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline. - jpeg_decoder(jpeg_decoder_stream *pStream); - - ~jpeg_decoder(); - - // Call this method after constructing the object to begin decompression. - // If JPGD_SUCCESS is returned you may then call decode() on each scanline. - int begin_decoding(); - - // Returns the next scan line. - // For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1). - // Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4). - // Returns JPGD_SUCCESS if a scan line has been returned. - // Returns JPGD_DONE if all scan lines have been returned. - // Returns JPGD_FAILED if an error occurred. Call get_error_code() for a more info. - int decode(const void** pScan_line, uint* pScan_line_len); - - inline jpgd_status get_error_code() const { return m_error_code; } - - inline int get_width() const { return m_image_x_size; } - inline int get_height() const { return m_image_y_size; } - - inline int get_num_components() const { return m_comps_in_frame; } - - inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; } - inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); } - - // Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file). - inline int get_total_bytes_read() const { return m_total_bytes_read; } - - private: - jpeg_decoder(const jpeg_decoder &); - jpeg_decoder &operator =(const jpeg_decoder &); - - typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int); - - struct huff_tables - { - bool ac_table; - uint look_up[256]; - uint look_up2[256]; - uint8 code_size[256]; - uint tree[512]; - }; - - struct coeff_buf - { - uint8 *pData; - int block_num_x, block_num_y; - int block_len_x, block_len_y; - int block_size; - }; - - struct mem_block - { - mem_block *m_pNext; - size_t m_used_count; - size_t m_size; - char m_data[1]; - }; - - jmp_buf m_jmp_state; - mem_block *m_pMem_blocks; - int m_image_x_size; - int m_image_y_size; - jpeg_decoder_stream *m_pStream; - int m_progressive_flag; - uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES]; - uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size - uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size - jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables - int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported) - int m_comps_in_frame; // # of components in frame - int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor - int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor - int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector - int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID - int m_comp_h_blocks[JPGD_MAX_COMPONENTS]; - int m_comp_v_blocks[JPGD_MAX_COMPONENTS]; - int m_comps_in_scan; // # of components in scan - int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan - int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector - int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector - int m_spectral_start; // spectral selection start - int m_spectral_end; // spectral selection end - int m_successive_low; // successive approximation low - int m_successive_high; // successive approximation high - int m_max_mcu_x_size; // MCU's max. X size in pixels - int m_max_mcu_y_size; // MCU's max. Y size in pixels - int m_blocks_per_mcu; - int m_max_blocks_per_row; - int m_mcus_per_row, m_mcus_per_col; - int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU]; - int m_total_lines_left; // total # lines left in image - int m_mcu_lines_left; // total # lines left in this MCU - int m_real_dest_bytes_per_scan_line; - int m_dest_bytes_per_scan_line; // rounded up - int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y) - huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES]; - coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS]; - coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS]; - int m_eob_run; - int m_block_y_mcu[JPGD_MAX_COMPONENTS]; - uint8* m_pIn_buf_ofs; - int m_in_buf_left; - int m_tem_flag; - bool m_eof_flag; - uint8 m_in_buf_pad_start[128]; - uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128]; - uint8 m_in_buf_pad_end[128]; - int m_bits_left; - uint m_bit_buf; - int m_restart_interval; - int m_restarts_left; - int m_next_restart_num; - int m_max_mcus_per_row; - int m_max_blocks_per_mcu; - int m_expanded_blocks_per_mcu; - int m_expanded_blocks_per_row; - int m_expanded_blocks_per_component; - bool m_freq_domain_chroma_upsample; - int m_max_mcus_per_col; - uint m_last_dc_val[JPGD_MAX_COMPONENTS]; - jpgd_block_t* m_pMCU_coefficients; - int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU]; - uint8* m_pSample_buf; - int m_crr[256]; - int m_cbb[256]; - int m_crg[256]; - int m_cbg[256]; - uint8* m_pScan_line_0; - uint8* m_pScan_line_1; - jpgd_status m_error_code; - bool m_ready_flag; - int m_total_bytes_read; - - void free_all_blocks(); - // BEGIN EPIC MOD - UE_NORETURN void stop_decoding(jpgd_status status); - // END EPIC MOD - void *alloc(size_t n, bool zero = false); - void word_clear(void *p, uint16 c, uint n); - void prep_in_buffer(); - void read_dht_marker(); - void read_dqt_marker(); - void read_sof_marker(); - void skip_variable_marker(); - void read_dri_marker(); - void read_sos_marker(); - int next_marker(); - int process_markers(); - void locate_soi_marker(); - void locate_sof_marker(); - int locate_sos_marker(); - void init(jpeg_decoder_stream * pStream); - void create_look_ups(); - void fix_in_buffer(); - void transform_mcu(int mcu_row); - void transform_mcu_expand(int mcu_row); - coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y); - inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y); - void load_next_row(); - void decode_next_row(); - void make_huff_table(int index, huff_tables *pH); - void check_quant_tables(); - void check_huff_tables(); - void calc_mcu_block_order(); - int init_scan(); - void init_frame(); - void process_restart(); - void decode_scan(pDecode_block_func decode_block_func); - void init_progressive(); - void init_sequential(); - void decode_start(); - void decode_init(jpeg_decoder_stream * pStream); - void H2V2Convert(); - void H2V1Convert(); - void H1V2Convert(); - void H1V1Convert(); - void gray_convert(); - void expanded_convert(); - void find_eoi(); - inline uint get_char(); - inline uint get_char(bool *pPadding_flag); - inline void stuff_char(uint8 q); - inline uint8 get_octet(); - inline uint get_bits(int num_bits); - inline uint get_bits_no_markers(int numbits); - inline int huff_decode(huff_tables *pH); - inline int huff_decode(huff_tables *pH, int& extrabits); - static inline uint8 clamp(int i); - static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y); - }; - -} // namespace jpgd - -#endif // JPEG_DECODER_H diff --git a/spaces/frapochetti/blurry-faces/README.md b/spaces/frapochetti/blurry-faces/README.md deleted file mode 100644 index 196a4d3338d03fc03c5b508fdf8643921bddcd04..0000000000000000000000000000000000000000 --- a/spaces/frapochetti/blurry-faces/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Blurry Faces -emoji: 🙈 -colorFrom: pink -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/freddyaboulton/gradio_folium/app.py b/spaces/freddyaboulton/gradio_folium/app.py deleted file mode 100644 index 9135f09a8f24e4a54f0084d93780c9d216efc28e..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_folium/app.py +++ /dev/null @@ -1,21 +0,0 @@ - -import gradio as gr -from gradio_folium import Folium -from folium import Map -import pandas as pd -import pathlib - -df = pd.read_csv(pathlib.Path(__file__).parent / "cities.csv") - -def select(df, data: gr.SelectData): - row = df.iloc[data.index[0], :] - return Map(location=[row['Latitude'], row['Longitude']]) - -with gr.Blocks() as demo: - gr.Markdown(("# 🗺️ Explore World Capitals with Gradio and Folium\n" - "Install this custom component with `pip install gradio_folium`")) - map = Folium(value=Map(location=[25.7617, -80.1918]), height=400) - data = gr.DataFrame(value=df, height=200) - data.select(select, data, map) - -demo.launch() diff --git a/spaces/generalHolmogorets/README/README.md b/spaces/generalHolmogorets/README/README.md deleted file mode 100644 index 895ddbb4cce1462efc334b6d3fd7b4820214b2fb..0000000000000000000000000000000000000000 --- a/spaces/generalHolmogorets/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🐠 -colorFrom: pink -colorTo: pink -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/iter_based_runner.py deleted file mode 100644 index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. - """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. - """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. - """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. - - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super(IterBasedRunner, self).register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pan/decoder.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pan/decoder.py deleted file mode 100644 index ab8f8675a1b5dc6ed3447aef63caeb7d96331529..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pan/decoder.py +++ /dev/null @@ -1,208 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class ConvBnRelu(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int, - stride: int = 1, - padding: int = 0, - dilation: int = 1, - groups: int = 1, - bias: bool = True, - add_relu: bool = True, - interpolate: bool = False, - ): - super(ConvBnRelu, self).__init__() - self.conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - bias=bias, - groups=groups, - ) - self.add_relu = add_relu - self.interpolate = interpolate - self.bn = nn.BatchNorm2d(out_channels) - self.activation = nn.ReLU(inplace=True) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - if self.add_relu: - x = self.activation(x) - if self.interpolate: - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - return x - - -class FPABlock(nn.Module): - def __init__(self, in_channels, out_channels, upscale_mode="bilinear"): - super(FPABlock, self).__init__() - - self.upscale_mode = upscale_mode - if self.upscale_mode == "bilinear": - self.align_corners = True - else: - self.align_corners = False - - # global pooling branch - self.branch1 = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - ConvBnRelu( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - ), - ) - - # midddle branch - self.mid = nn.Sequential( - ConvBnRelu( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - ) - ) - self.down1 = nn.Sequential( - nn.MaxPool2d(kernel_size=2, stride=2), - ConvBnRelu( - in_channels=in_channels, - out_channels=1, - kernel_size=7, - stride=1, - padding=3, - ), - ) - self.down2 = nn.Sequential( - nn.MaxPool2d(kernel_size=2, stride=2), - ConvBnRelu( - in_channels=1, out_channels=1, kernel_size=5, stride=1, padding=2 - ), - ) - self.down3 = nn.Sequential( - nn.MaxPool2d(kernel_size=2, stride=2), - ConvBnRelu( - in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1 - ), - ConvBnRelu( - in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1 - ), - ) - self.conv2 = ConvBnRelu( - in_channels=1, out_channels=1, kernel_size=5, stride=1, padding=2 - ) - self.conv1 = ConvBnRelu( - in_channels=1, out_channels=1, kernel_size=7, stride=1, padding=3 - ) - - def forward(self, x): - h, w = x.size(2), x.size(3) - b1 = self.branch1(x) - upscale_parameters = dict( - mode=self.upscale_mode, align_corners=self.align_corners - ) - b1 = F.interpolate(b1, size=(h, w), **upscale_parameters) - - mid = self.mid(x) - x1 = self.down1(x) - x2 = self.down2(x1) - x3 = self.down3(x2) - x3 = F.interpolate(x3, size=(h // 4, w // 4), **upscale_parameters) - - x2 = self.conv2(x2) - x = x2 + x3 - x = F.interpolate(x, size=(h // 2, w // 2), **upscale_parameters) - - x1 = self.conv1(x1) - x = x + x1 - x = F.interpolate(x, size=(h, w), **upscale_parameters) - - x = torch.mul(x, mid) - x = x + b1 - return x - - -class GAUBlock(nn.Module): - def __init__( - self, in_channels: int, out_channels: int, upscale_mode: str = "bilinear" - ): - super(GAUBlock, self).__init__() - - self.upscale_mode = upscale_mode - self.align_corners = True if upscale_mode == "bilinear" else None - - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - ConvBnRelu( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=1, - add_relu=False, - ), - nn.Sigmoid(), - ) - self.conv2 = ConvBnRelu( - in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=1 - ) - - def forward(self, x, y): - """ - Args: - x: low level feature - y: high level feature - """ - h, w = x.size(2), x.size(3) - y_up = F.interpolate( - y, size=(h, w), mode=self.upscale_mode, align_corners=self.align_corners - ) - x = self.conv2(x) - y = self.conv1(y) - z = torch.mul(x, y) - return y_up + z - - -class PANDecoder(nn.Module): - def __init__( - self, encoder_channels, decoder_channels, upscale_mode: str = "bilinear" - ): - super().__init__() - - self.fpa = FPABlock( - in_channels=encoder_channels[-1], out_channels=decoder_channels - ) - self.gau3 = GAUBlock( - in_channels=encoder_channels[-2], - out_channels=decoder_channels, - upscale_mode=upscale_mode, - ) - self.gau2 = GAUBlock( - in_channels=encoder_channels[-3], - out_channels=decoder_channels, - upscale_mode=upscale_mode, - ) - self.gau1 = GAUBlock( - in_channels=encoder_channels[-4], - out_channels=decoder_channels, - upscale_mode=upscale_mode, - ) - - def forward(self, *features): - bottleneck = features[-1] - x5 = self.fpa(bottleneck) # 1/32 - x4 = self.gau3(features[-2], x5) # 1/16 - x3 = self.gau2(features[-3], x4) # 1/8 - x2 = self.gau1(features[-4], x3) # 1/4 - - return x2 diff --git a/spaces/glrh11/object-detection/app.py b/spaces/glrh11/object-detection/app.py deleted file mode 100644 index 406ceee5c65b5eb06dfb31ce18e16741b1e3b450..0000000000000000000000000000000000000000 --- a/spaces/glrh11/object-detection/app.py +++ /dev/null @@ -1,84 +0,0 @@ -import torch - -from transformers import AutoImageProcessor, AutoModelForObjectDetection -#from transformers import pipeline - -from PIL import Image -import matplotlib.pyplot as plt -import matplotlib.patches as patches - -import io -from random import choice - -image_processor_tiny = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny") -model_tiny = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny") - -import gradio as gr - -COLORS = ["#ff7f7f", "#ff7fbf", "#ff7fff", "#bf7fff", - "#7f7fff", "#7fbfff", "#7fffff", "#7fffbf", - "#7fff7f", "#bfff7f", "#ffff7f", "#ffbf7f"] - -fdic = { - "family" : "DejaVu Serif", - "style" : "normal", - "size" : 18, - "color" : "yellow", - "weight" : "bold" -} - - -def get_figure(in_pil_img, in_results): - plt.figure(figsize=(16, 10)) - plt.imshow(in_pil_img) - ax = plt.gca() - - for score, label, box in zip(in_results["scores"], in_results["labels"], in_results["boxes"]): - selected_color = choice(COLORS) - - box_int = [i.item() for i in torch.round(box).to(torch.int32)] - x, y, w, h = box_int[0], box_int[1], box_int[2]-box_int[0], box_int[3]-box_int[1] - #x, y, w, h = torch.round(box[0]).item(), torch.round(box[1]).item(), torch.round(box[2]-box[0]).item(), torch.round(box[3]-box[1]).item() - - ax.add_patch(plt.Rectangle((x, y), w, h, fill=False, color=selected_color, linewidth=3, alpha=0.8)) - ax.text(x, y, f"{model_tiny.config.id2label[label.item()]}: {round(score.item()*100, 2)}%", fontdict=fdic, alpha=0.8) - - plt.axis("off") - - return plt.gcf() - - -def infer(in_pil_img, in_threshold=0.9): - target_sizes = torch.tensor([in_pil_img.size[::-1]]) - - inputs = image_processor_tiny(images=in_pil_img, return_tensors="pt") - outputs = model_tiny(**inputs) - - # convert outputs (bounding boxes and class logits) to COCO API - results = image_processor_tiny.post_process_object_detection(outputs, threshold=in_threshold, target_sizes=target_sizes)[0] - - figure = get_figure(in_pil_img, results) - - buf = io.BytesIO() - figure.savefig(buf, bbox_inches='tight') - buf.seek(0) - output_pil_img = Image.open(buf) - - return output_pil_img - - -with gr.Blocks(title="Object Detection") as demo: - - with gr.Row(): - input_image = gr.Image(label="Input image", type="pil") - output_image = gr.Image(label="Output image with predicted instances", type="pil") - - gr.Examples(['samples/1.jpeg', 'samples/2.JPG'], inputs=input_image) - - threshold = gr.Slider(0, 1.0, value=0.9, label='threshold') - - send_btn = gr.Button("Infer") - send_btn.click(fn=infer, inputs=[input_image, threshold], outputs=[output_image]) - -#demo.queue() -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Hentai Key Collection (All Final Games).md b/spaces/gotiQspiryo/whisper-ui/examples/Hentai Key Collection (All Final Games).md deleted file mode 100644 index 447e7b8cf35c6ef08b84403e3315573f3b911fd2..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Hentai Key Collection (All Final Games).md +++ /dev/null @@ -1,17 +0,0 @@ - -

            Hentai game is not in english?
            All hentai games are japanese but you can try a howto play a hentai game in english. (Download all my how to's) (ITH HowTo) (VNR HowTo)

            The hentai game needs a NoDVD Patch?
            Download the NoDVD Pack with it you can Patch many games plus DRM to.


            -

            Hentai Key Collection (All final games)


            Download ✒ ✒ ✒ https://urlgoal.com/2uyLIM



            -

            Hentai Key is a hentai porn platform that contains over 50 hentai websites of all kinds. You can check out any one of them, or you can check out Hentai Key since it has so much amazing content on it too! Here is what you can expect to find on this premium hentai website: full-length uncensored hentai movies with all the episodes included, hentai comics and all their releases, hentai games, hentai pictures, doujin, and so much more. Check it all out and enjoy yourself as a fan of hentai today!

            -

            When anyone talks about the content that you can find on Hentai Key, they just must mention how diverse all the stuff you will find here is. Not only are you going to get plenty of hentai porn in the form of videos, but you will also find plenty of pictures, comics, games, and so much more. It is like you have come to hentai heaven when you first stumble upon this website.

            Starting off with the movies, you can rest assured that you will get all the chapters of every hentai porn movie that is featured here. Not only that, but all the videos are full-length, and it is completely uncensored. Many people do not even think that uncensored hentai exists, and yet here it is, ready and waiting for you to watch it on Hentai Key. The movies section is one of the best ones to check out on this website with impeccable quality. The only problem is that the videos sometimes load slowly when you skip around them to reach a certain point in the action.

            Other than movies, you will also find plenty of hentai comics. These are great, and with each one you get all the chapters to read through. These are always in full color and they are just top-notch in terms of both quality but also storytelling since it is not just constant sex without any substance. This is great stuff.

            If you are looking for a more laid-back approach to comics though, you can always check out the countless doujin comics which are posted here. These have a remarkably high chance of being in black and white and are usually worse in quality, but they also feature a lot more sex so you can get to the action while you are jerking off much more quickly.

            After that you have got the games, which usually use Flash Player for them to work. Sadly, Flash has been going out of fashion in recent times, so it can be hard to play all these flash games offered here. Not to mention that some of these games will not work even if you have Flash installed, so there is that.

            Finally, you have got the pictures, and these are great for if you have a craving for still frames of amazing hentai artwork. Some people just need a single frame to masturbate and they get it here. All this content from these various categories gets updated quite often and there is so much of it to explore and enjoy on this premium hentai website.

            All of this will cost you somewhere around $30 every month, but you can also buy the 6-month membership for $90 which is a bargain since you are basically getting 3 months for free in that case. You also have the tri-monthly version of the membership, and even a free 7-day trial to start you off with!

            -

            When all is said and done, Hentai Key really does have the key to your happiness if you are a fan of animated Japanese-style porn, also known as hentai. With so many full-length uncensored movies to explore, as well as a bunch of other stuff like games, comics, doujin, pictures, and other sites, you will be exploring the endless possibilities with Hentai Key for an awfully long time. All of this will run you a standard premium monthly price at $30 or cheaper with longer memberships.

            -

            Aside from the erotic anime on storage (also called hentai), bin Laden also had a folder full of emulator (software that lets you play old console games on your PC) cover art from old games. The strange thing was, quite a few of them were the non-PG13 kind, awash with screenshots of nude women, as well as pixel art of semi nude female characters. There are just no words anymore.

            -

            About Waifu Sex Simulator:
            Dive into the world of anime and hentai, choose your favorite character from a pool of more than 1,000 models coming from the most famous animes, games, and tv shows. With more than 400 independent animations you can live the most complete sex experience, ranging from the most usual sex positions to the most extreme and rough ones. Use your controllers or even your own bare hands thanks to Leap Motion ( ) to interact with the models.

            -

            -

            Enter a crazy universe where manga girls have gone wild for sex! Create your own harem of the horniest hentai maidens and defeat opponents in thrilling sexual contests. In Harem Heroes, you'll enjoy a real RPG with tons of uncensored hentai content. Explore a mirror universe of video-game girls, recruit them to your team, grow your harem, and build up your hero to defeat other players in strategic harem battles! Can you create the mightiest harem of this oversexed world? Find out in Harem Heroes!

            Harem Heroes was included as one of the top 10 best games with animated sex scenes.

            -

            An original story embedded with manga, comic books, video games, and pop culture! Discover the best porn hentai adventure through the visual novel format with plenty of humor and hilarious references. Enjoy hundreds of uncensored hentai illustrations, accessible to all players for free.

            -

            Way back in 2001, I started collecting doujinshi (mostly Final Fantasy VII and KOF, with some Samurai Shodown and Tenchi Muyo! thrown in for good measure). It didn't matter that I couldn't read Japanese; if the cover looked good, I had to have it (regardless of whether or not the content on the inside looked like it was drawn by a fourth-grader). From there, things spiraled out of control, with books based on Guilty Gear, DBZ, Cowboy Bebop, Evangelion, Ghost in the Shell, Escaflowne and pretty much anything SNK-related added to the mix over the years, all in the quest to find that one doujinshi that best reflected my fan pairings (unlikely as it may be). By 2004, I had even included Rurouni Kenshin, The Legend of Zelda and series I had no interest in (like Final Fantasy VI, Puyo Puyo and Azumanga Daioh... before I even knew what Azumanga Daioh was). I kept my focus solely on normal doujinshi (because back then, you couldn't swing a dead cat around without hitting a hentai or yaoi doujinshi*), targeting my favorite pairings from the series mentioned above (such as Cloud x Tifa, Vincent x Yuffie, Galford x Nakoruru, Miroku x Sango, Sol x Baiken & Ky x Dizzy; even odd pairings like Kenshin x Megumi, Shingo x Kula, Miroku x Kikyo and Sol x Dizzy caught my eye). By the summer of that same year, I knew I had to stop (and not because of the over-200 doujinshi I had accumulated by that point in time). So, over the next two years, I gradually sold off my doujinshi. As of September 2006, I had sold off nearly all of my doujinshi and kicked my habit (or so I thought; I purchased a Benimaru x Kula book in March 2007 before finally calling it quits).

            -

            This article introduces Japanese dating-simulation (dating-sim) games and examines their depictions of young men, women, and romantic relationships in "virtual Japan." Several intersecting realms of Japanese popular culture are examined--video games, anime [TEXT NOT REPRODUCIBLE IN ASCII], manga [TEXT NOT REPRODUCIBLE IN ASCII], and hentai [TEXT NOT REPRODUCIBLE IN ASCII] (pornography)--in order to classify dating-sim games. Dating-sim games are also placed in a social context in contemporary Japan by analyzing common attitudes toward otaku [TEXT NOT REPRODUCIBLE IN ASCII] (geek) culture. Otaku are increasingly labeled as shojo [TEXT NOT REPRODUCIBLE IN ASCII] (lit., young girls), essentially a feminization; but dating-sim games offer players a clear identity as non-shojo. This article also demonstrates how dating-sim games are strongly enmeshed within Japanese culture, one major factor that has hindered their popularity abroad.

            -

            It's been over 3 years since the Game of Thrones closing curtain saw HBO's flagship exhibit bow out with a whimper sort o than a eruption, but the world of Westeros finally sex games characters returns stylish 2022.

            -

            Namco new games hentai did an amazing job of keeping the arcade - style graphics intact on this place conversion. The nontextual matter wish stun you with their fluidity and speed up, from the possible action cinemas to the movielike character endings. Control

            aaccfb2cb3
            -
            -
            \ No newline at end of file diff --git a/spaces/gptjx/02/presets.py b/spaces/gptjx/02/presets.py deleted file mode 100644 index 509847a067d04ca97e24fb3cdd5dd629a37cc26e..0000000000000000000000000000000000000000 --- a/spaces/gptjx/02/presets.py +++ /dev/null @@ -1,87 +0,0 @@ -# -*- coding:utf-8 -*- - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -connection_timeout_prompt = "连接超时,无法获取对话。" # 连接超时 -read_timeout_prompt = "读取超时,无法获取对话。" # 读取超时 -proxy_error_prompt = "代理错误,无法获取对话。" # 代理错误 -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL 错误 -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key 长度不足 51 位 - -max_token_streaming = 3500 # 流式对话时的最大 token 数 -timeout_streaming = 5 # 流式对话时的超时时间 -max_token_all = 3500 # 非流式对话时的最大 token 数 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """

            ChatGPT镜像专业版 🚀

            """ -description = """\ -
            - -由GPT镜像 [GPTJX.COM](https://www.gptjx.com) 和 [ChatGPT账号](https://qudao.mfys666.com)赞助 - -专属智能AI助手 [ChatGPT镜像](https://vip.gptjx.com) - 专业版 - -此AI使用 `OpenAI API` 大语言模型 -
            -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in 中文""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in 中文 -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Answer in the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch. -If the context isn't useful, return the original answer. -""" diff --git a/spaces/gradio/HuBERT/examples/speech_to_text/seg_mustc_data.py b/spaces/gradio/HuBERT/examples/speech_to_text/seg_mustc_data.py deleted file mode 100644 index 1ee665d6399729afe17d790d872eff34de124900..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_to_text/seg_mustc_data.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -import soundfile as sf -from examples.speech_to_text.prep_mustc_data import ( - MUSTC -) - -from tqdm import tqdm - -log = logging.getLogger(__name__) - - -def main(args): - root = Path(args.data_root).absolute() - lang = args.lang - split = args.split - - cur_root = root / f"en-{lang}" - assert cur_root.is_dir(), ( - f"{cur_root.as_posix()} does not exist. Skipped." - ) - - dataset = MUSTC(root.as_posix(), lang, split) - output = Path(args.output).absolute() - output.mkdir(exist_ok=True) - f_text = open(output / f"{split}.{lang}", "w") - f_wav_list = open(output / f"{split}.wav_list", "w") - for waveform, sample_rate, _, text, _, utt_id in tqdm(dataset): - sf.write( - output / f"{utt_id}.wav", - waveform.squeeze(0).numpy(), - samplerate=int(sample_rate) - ) - f_text.write(text + "\n") - f_wav_list.write(str(output / f"{utt_id}.wav") + "\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument("--task", required=True, type=str, choices=["asr", "st"]) - parser.add_argument("--lang", required=True, type=str) - parser.add_argument("--output", required=True, type=str) - parser.add_argument("--split", required=True, choices=MUSTC.SPLITS) - args = parser.parse_args() - - main(args) diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/ChatLoader.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/ChatLoader.tsx deleted file mode 100644 index e666d5759f502ebb041c2ebc5548a045df4c796a..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/ChatLoader.tsx +++ /dev/null @@ -1,20 +0,0 @@ -import { IconRobot } from '@tabler/icons-react'; -import { FC } from 'react'; - -interface Props { } - -export const ChatLoader: FC = () => { - return ( -
            -
            -
            - -
            - -
            -
            - ); -}; diff --git a/spaces/h2oai/wave-tour/examples/plot_interval_stacked_grouped.py b/spaces/h2oai/wave-tour/examples/plot_interval_stacked_grouped.py deleted file mode 100644 index 9084f3797965a4521c4d14716d3a363b1a204030..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/plot_interval_stacked_grouped.py +++ /dev/null @@ -1,38 +0,0 @@ -# Plot / Interval / Stacked / Grouped -# Make a column #plot with both #stacked and grouped bars. #interval -# --- -from h2o_wave import site, data, ui - -page = site['/demo'] - -page.add('example', ui.plot_card( - box='1 1 4 5', - title='Intervals, stacked and dodged', - data=data('day type time gender', 12, rows=[ - ('Mon.','series1', 2800, 'male'), - ('Mon.','series1', 1800, 'female'), - ('Mon.','series2', 2260, 'female'), - ('Mon.','series2', 710, 'male'), - ('Tues.','series1', 1800, 'male'), - ('Tues.','series1', 290, 'female'), - ('Tues.','series2', 1300, 'female'), - ('Tues.','series2', 960, 'male'), - ('Wed.','series1', 950, 'male'), - ('Wed.','series1', 2730, 'female'), - ('Wed.','series2', 1390, 'male'), - ('Wed.','series2', 900, 'female'), - ]), - plot=ui.plot([ - ui.mark( - type='interval', - x='=day', - y='=time', - color='=type', - stack='auto', - dodge='=gender', - y_min=0 - ) - ]) -)) - -page.save() diff --git a/spaces/h2oai/wave-tour/examples/table_select_multiple.py b/spaces/h2oai/wave-tour/examples/table_select_multiple.py deleted file mode 100644 index e804318bdd9fb522323f416cdf55663b33a1dc7d..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/table_select_multiple.py +++ /dev/null @@ -1,32 +0,0 @@ -# Table / Preselection / Multiple -# Use a #table as an advanced multi-select. To allow multiple #selection, -# specify the pre-selected row names in 'values' or simply specify the `multiple=True`. -# #table #selection -# --- -from h2o_wave import main, app, Q, ui - -@app('/demo') -async def serve(q: Q): - if q.args.show_inputs: - q.page['example'].items = [ - ui.text(f'selected={q.args.table}'), - ui.button(name='show_form', label='Back', primary=True), - ] - else: - q.page['example'] = ui.form_card(box='1 1 -1 5', items=[ - ui.table( - name='table', - columns=[ui.table_column(name='text', label='Table multiple selection', min_width='300px')], - rows=[ - ui.table_row(name='row1', cells=['Row 1']), - ui.table_row(name='row2', cells=['Row 2']), - ui.table_row(name='row3', cells=['Row 3']), - ui.table_row(name='row4', cells=['Row 4']), - ui.table_row(name='row5', cells=['Row 5']) - ], - values=['row2','row4'], - ), - ui.button(name='show_inputs', label='Submit', primary=True) - ]) - await q.page.save() - diff --git a/spaces/h2oai/wave-tour/examples/todo.py b/spaces/h2oai/wave-tour/examples/todo.py deleted file mode 100644 index fbe323dcdecf3e3e012b433e39dab56cd34ac6b8..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/todo.py +++ /dev/null @@ -1,76 +0,0 @@ -# To-do List App -# A simple multi-user To-do list application. -# --- -from h2o_wave import main, app, Q, ui -from typing import List - -_id = 0 - - -# A simple class that represents a to-do item. -class TodoItem: - def __init__(self, label): - global _id - _id += 1 - self.id = f'todo_{_id}' - self.done = False - self.label = label - - -@app('/demo') -async def serve(q: Q): - if q.args.new_todo: # Display an input form. - await new_todo(q) - elif q.args.add_todo: # Add an item. - await add_todo(q) - else: # Show all items. - await show_todos(q) - - -async def show_todos(q: Q): - # Get items for this user. - todos: List[TodoItem] = q.user.todos - - # Create a sample list if we don't have any. - if todos is None: - q.user.todos = todos = [TodoItem('Do this'), TodoItem('Do that'), TodoItem('Do something else')] - - # If the user checked/unchecked an item, update our list. - for todo in todos: - if todo.id in q.args: - todo.done = q.args[todo.id] - - # Create done/not-done checkboxes. - done = [ui.checkbox(name=todo.id, label=todo.label, value=True, trigger=True) for todo in todos if todo.done] - not_done = [ui.checkbox(name=todo.id, label=todo.label, trigger=True) for todo in todos if not todo.done] - - # Display list - q.page['form'] = ui.form_card(box='1 1 4 3', items=[ - ui.text_l('To Do'), - ui.button(name='new_todo', label='Add To Do...', primary=True), - *not_done, - *([ui.separator('Done')] if len(done) else []), - *done, - ]) - await q.page.save() - - -async def add_todo(q: Q): - # Insert a new item - q.user.todos.insert(0, TodoItem(q.args.label or 'Untitled')) - - # Go back to our list. - await show_todos(q) - - -async def new_todo(q: Q): - # Display an input form - q.page['form'] = ui.form_card(box='1 1 4 3', items=[ - ui.text_l('Add To Do'), - ui.textbox(name='label', label='What needs to be done?', multiline=True), - ui.buttons([ - ui.button(name='add_todo', label='Add', primary=True), - ui.button(name='show_todos', label='Back'), - ]), - ]) - await q.page.save() diff --git a/spaces/haakohu/deep_privacy2_face/configs/anonymizers/market1501/person.py b/spaces/haakohu/deep_privacy2_face/configs/anonymizers/market1501/person.py deleted file mode 100644 index 51fa99b21f068ce68f796fd32c85d37d9a22bec1..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/configs/anonymizers/market1501/person.py +++ /dev/null @@ -1,6 +0,0 @@ -from ..FB_cse_mask_face import anonymizer, detector, common - -detector.score_threshold = .1 -detector.face_detector_cfg.confidence_threshold = .5 -detector.cse_cfg.score_thres = 0.3 -anonymizer.generators.face_G_cfg = None \ No newline at end of file diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/data/datasets/__init__.py b/spaces/hamacojr/CAT-Seg/cat_seg/data/datasets/__init__.py deleted file mode 100644 index 90d0d07e352ea3952b34176383d89d02456f76d1..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/data/datasets/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import ( - register_coco_stuff, - register_ade20k_150, - register_ade20k_847, - register_pascal_20, - register_pascal_59, -) diff --git a/spaces/hamacojr/CAT-Seg/open_clip/src/open_clip/hf_configs.py b/spaces/hamacojr/CAT-Seg/open_clip/src/open_clip/hf_configs.py deleted file mode 100644 index e236222bafce0358445ea16953ca0b2d5a84758a..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/open_clip/src/open_clip/hf_configs.py +++ /dev/null @@ -1,45 +0,0 @@ -# HF architecture dict: -arch_dict = { - # https://huggingface.co/docs/transformers/model_doc/roberta#roberta - "roberta": { - "config_names": { - "context_length": "max_position_embeddings", - "vocab_size": "vocab_size", - "width": "hidden_size", - "heads": "num_attention_heads", - "layers": "num_hidden_layers", - "layer_attr": "layer", - "token_embeddings_attr": "embeddings" - }, - "pooler": "mean_pooler", - }, - # https://huggingface.co/docs/transformers/model_doc/xlm-roberta#transformers.XLMRobertaConfig - "xlm-roberta": { - "config_names": { - "context_length": "max_position_embeddings", - "vocab_size": "vocab_size", - "width": "hidden_size", - "heads": "num_attention_heads", - "layers": "num_hidden_layers", - "layer_attr": "layer", - "token_embeddings_attr": "embeddings" - }, - "pooler": "mean_pooler", - }, - # https://huggingface.co/docs/transformers/model_doc/mt5#mt5 - "mt5": { - "config_names": { - # unlimited seqlen - # https://github.com/google-research/text-to-text-transfer-transformer/issues/273 - # https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/t5/modeling_t5.py#L374 - "context_length": "", - "vocab_size": "vocab_size", - "width": "d_model", - "heads": "num_heads", - "layers": "num_layers", - "layer_attr": "block", - "token_embeddings_attr": "embed_tokens" - }, - "pooler": "mean_pooler", - }, -} diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/backbone/fpn.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/backbone/fpn.py deleted file mode 100644 index e8322db06bbaf471662427663c5a45ca3d99d305..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/backbone/fpn.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import torch -import torch.nn.functional as F -from torch import nn - -class FPN(nn.Module): - """ - Module that adds FPN on top of a list of feature maps. - The feature maps are currently supposed to be in increasing depth - order, and must be consecutive - """ - - def __init__( - self, in_channels_list, out_channels, conv_block, top_blocks=None, drop_block=None, use_spp=False, use_pan=False, - return_swint_feature_before_fusion=False - ): - """ - Arguments: - in_channels_list (list[int]): number of channels for each feature map that - will be fed - out_channels (int): number of channels of the FPN representation - top_blocks (nn.Module or None): if provided, an extra operation will - be performed on the output of the last (smallest resolution) - FPN output, and the result will extend the result list - """ - super(FPN, self).__init__() - self.inner_blocks = [] - self.layer_blocks = [] - self.pan_blocks = [] if use_pan else None - self.spp_block = SPPLayer() if use_spp else None - self.return_swint_feature_before_fusion = return_swint_feature_before_fusion - for idx, in_channels in enumerate(in_channels_list, 1): - inner_block = "fpn_inner{}".format(idx) - layer_block = "fpn_layer{}".format(idx) - - if in_channels == 0: - continue - if idx==len(in_channels_list) and use_spp: - in_channels = in_channels*4 - inner_block_module = conv_block(in_channels, out_channels, 1) - layer_block_module = conv_block(out_channels, out_channels, 3, 1) - self.add_module(inner_block, inner_block_module) - self.add_module(layer_block, layer_block_module) - self.inner_blocks.append(inner_block) - self.layer_blocks.append(layer_block) - - if use_pan: - pan_in_block = "pan_in_layer{}".format(idx) - pan_in_block_module = conv_block(out_channels, out_channels, 3, 2) - self.add_module(pan_in_block, pan_in_block_module) - pan_out_block = "pan_out_layer{}".format(idx) - pan_out_block_module = conv_block(out_channels, out_channels, 3, 1) - self.add_module(pan_out_block, pan_out_block_module) - self.pan_blocks.append([pan_in_block, pan_out_block]) - - self.top_blocks = top_blocks - self.drop_block = drop_block - - def forward(self, x): - """ - Arguments: - x (list[Tensor]): feature maps for each feature level. - Returns: - results (tuple[Tensor]): feature maps after FPN layers. - They are ordered from highest resolution first. - """ - if type(x) is tuple: - # for the case of VL backbone - x, x_text = x[0], x[1] - # print([v.shape for v in x]) - swint_feature_c4 = None - if self.return_swint_feature_before_fusion: - # TODO: here we only return last single scale feature map before the backbone fusion, should be more flexible - swint_feature_c4 = x[-2] - - if self.spp_block: - last_inner = getattr(self, self.inner_blocks[-1])(self.spp_block(x[-1])) - else: - last_inner = getattr(self, self.inner_blocks[-1])(x[-1]) - results = [] - results.append(getattr(self, self.layer_blocks[-1])(last_inner)) - for feature, inner_block, layer_block in zip( - x[:-1][::-1], self.inner_blocks[:-1][::-1], self.layer_blocks[:-1][::-1] - ): - if not inner_block: - continue - inner_lateral = getattr(self, inner_block)(feature) - - if inner_lateral.shape[-2:] != last_inner.shape[-2:]: - # TODO: could also give size instead of - inner_top_down = F.interpolate(last_inner, size=inner_lateral.shape[-2:], mode="nearest") - else: - inner_top_down = last_inner - - # TODO use size instead of scale to make it robust to different sizes - # inner_top_down = F.upsample(last_inner, size=inner_lateral.shape[-2:], - # mode='bilinear', align_corners=False) - last_inner = inner_lateral + inner_top_down - if self.drop_block and self.training: - results.insert(0, getattr(self, layer_block)(self.drop_block(last_inner))) - else: - results.insert(0, getattr(self, layer_block)(last_inner)) - - if self.pan_blocks: - pan_results = [] - last_outer = results[0] - pan_results.append(last_outer) - for outer_top_down, pan_block in zip(results[1:], self.pan_blocks): - - if self.drop_block and self.training: - pan_lateral = getattr(self, pan_block[0])(self.drop_block(last_outer)) - else: - pan_lateral = getattr(self, pan_block[0])(last_outer) - - last_outer = getattr(self, pan_block[1])(pan_lateral + outer_top_down) - pan_results.append(last_outer) - results = pan_results - - if isinstance(self.top_blocks, LastLevelP6P7): - last_results = self.top_blocks(x[-1], results[-1]) - results.extend(last_results) - elif isinstance(self.top_blocks, LastLevelMaxPool): - last_results = self.top_blocks(results[-1]) - results.extend(last_results) - - try: - return tuple(results), x_text, swint_feature_c4 - except NameError as e: - return tuple(results) - - -class LastLevelMaxPool(nn.Module): - def forward(self, x): - return [F.max_pool2d(x, 1, 2, 0)] - - -class LastLevelP6P7(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7. - """ - def __init__(self, in_channels, out_channels): - super(LastLevelP6P7, self).__init__() - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - nn.init.kaiming_uniform_(module.weight, a=1) - nn.init.constant_(module.bias, 0) - self.use_P5 = in_channels == out_channels - - def forward(self, c5, p5): - x = p5 if self.use_P5 else c5 - p6 = self.p6(x) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -class SPPLayer(nn.Module): - def __init__(self): - super(SPPLayer, self).__init__() - - def forward(self, x): - x_1 = x - x_2 = F.max_pool2d(x, 5, stride=1, padding=2) - x_3 = F.max_pool2d(x, 9, stride=1, padding=4) - x_4 = F.max_pool2d(x, 13, stride=1, padding=6) - out = torch.cat((x_1, x_2, x_3, x_4),dim=1) - return out \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/README.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/README.md deleted file mode 100644 index f560384045ab4f6bc2beabef1170308fca117eb3..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/README.md +++ /dev/null @@ -1,9 +0,0 @@ -## Unit Tests - -To run the unittests, do: -``` -cd detectron2 -python -m unittest discover -v -s ./tests -``` - -There are also end-to-end inference & training tests, in [dev/run_*_tests.sh](../dev). diff --git a/spaces/heath1989/prompt-r-gen-sd/promptsModules/sd_command_gen.py b/spaces/heath1989/prompt-r-gen-sd/promptsModules/sd_command_gen.py deleted file mode 100644 index 3ee25acce13855e83790c2ddd007308b5698372b..0000000000000000000000000000000000000000 --- a/spaces/heath1989/prompt-r-gen-sd/promptsModules/sd_command_gen.py +++ /dev/null @@ -1,188 +0,0 @@ -# -*- coding:utf-8 -*- - -import argparse -from enum import IntEnum - -from promptsModules.configDB import (store_data_in_database, retrieve_data_from_database, list_alias, - delete_data_from_database) -from promptsModules.promptGen import (gen_prompt) - -output_file_name = "prompts.txt" - -""" -// lora -CHARACTER_ST_LOUIS = 201, POSE_SIT_CROSSLEG = 626, FUNC_DETAIL_TWEAKER = 901, FUNC_AHEGAO = 903, FUNC_ADD_CUMBERSOME = 904, -BODY_PERFECT_FULL_ROUND_BREASTS_SLIM_WAIST = 503, -// lyco -BACKGROUND_HALATION = 402, STYLE_BEAUTYLEGS = 601, STYLE_ABSTRACT_DREAMWAVE = 202 -""" - -""" -// adetailer prompt: -, 1girl, smile, cute, 18yo, extremely detailed eyes and face, beautiful face, -, 1girl, (smile), cute, 18yo, -""" - -project_config = { - # "preset": 2, - "lora": [101], # x - "lyco": [], # y - "embeddings": [], # z - "models_order": 'xyz', # lora, lyco, embeddings 输出顺序xyz - "lora_weights_random": True, - "additional_prompt": "", - - # 视角动作 - "angle": "null", # null则禁用 - "body_framing": "null", - "dynamic_mode": False, - "pose_type": 1, # base = 1 whole = 2 - - # 颜色,只用给颜色即可 - "leg_wear_color": "", - "shoes_color": "", - "hair_color": "null", - "enable_eye_color": True, - "disable_all_color": True, - - # 身体穿着 - "breasts_size": "large", # null则禁用 - # DRESS = 1 UNIFORM = 2 BODYSUIT = 3 TRADITIONAL = 4 CUSTOM = 5 RANDOM = 6 ASNULL = 7 - "body_wear": 7, - # "top_wear": TopWearType.SHIRTS, - # "bottom_wear": BottomWearType.SKIRT, - # socks = 1; knee_highs = 2; over_knee_highs = 3; thigh_highs = 4; pantyhose = 5; bare = 6; as_null = 7; random = 8 - "leg_wear": 7, - "panties": False, - # BOOTS, HIGHHEELS, SANDALS, SLIPPERS, BARE, ASNULL = 1, 2, 3, 4, 5, 6 - "shoes_type": 6, - - # 直接指定prompt,这会直接跳过其他配置,并且自动加深prompt权重 - "assign_focus_on": "null", # null则禁用 - "assign_pose": "null", # null则禁用 - "assign_profession": "null", - "assign_expression": "", - "assign_shoes": "", - "assign_leg_wear": "", - "assign_body_clothes": "", - "assign_panties": "", - "assign_girl_description": "", - "place": "null", - - # 身体相关 - "body_with": False, - "body_status": False, - "body_description": False, - "cloth_trim": False, - "add_focus": False, - - "nsfw_type": 3, # 1 nude 2 sexual 3 normal - - "accessories_random_tims": 0, # max:6 NOTE:对于某些model,如何这些prompt出现,可能会影响视角效果 - "suffix_words_random_times": 0, # 形容词缀随机次数 - "object_random_times": 0, # max: 6 - "sexual_list_random_index_times": 0, # max:5 - "nude_list_random_index_times": 0, # max:9 - "is_simple_nude": True, - - # 人物描述 - "has_girl_desc": False, # 是否加入超长的girl描述,目前看来大部分不需要 - "add_girl_beautyful": False, # girl前缀描述 - "add_hair_style": False, # 是否加入发型描述 - - # 其他配置 - "is_realistic": False, - "use_starting": True, # 是否使用咒语起手式 - "add_colors": False, - "enable_day_weather": False, # 是否启用天气 - "enable_light_effect": True, # 灯光效果 - "enable_image_tech": False, # 图像技术 - -} - - -def open_file(file_name): - return open(file_name, 'w') - - -def create_prompts(prompt_count): - prompts = "" - config_ = {} - f = open_file(output_file_name) - - for i in range(prompt_count): - prompt_tmp, config = gen_prompt(project_config) - prompts = prompts + prompt_tmp + "\n" - config_ = config - - target = str(config_) + "\n\n\n" + prompts - f.write(target) - return config_ - - -def convert_enum_to_int(data): - # 遍历字典中的键值对 - for key, value in data.items(): - # 检查值是否为IntEnum类型 - if isinstance(value, IntEnum): - # 将IntEnum值转换为整数 - data[key] = value.value - - return data - - -def main(): - # 创建参数解析器 - parser = argparse.ArgumentParser() - - # 添加命令行参数 - parser.add_argument("--m", default='1', help=""" - 运行模式: - m = 1 表示简单模式,只生成prompt,不保存该次配置 - m = 2 表示生成prompt,并保存该次配置 - m = 3 根据alias读取配置,并使用该配置生成prompt - m = 4 列出保存的alias - m = 5 删除保存的alias,根据id删除 - """) - parser.add_argument("--s", default='', required=False, help=""" - 保存配置的alias,仅在m = 2/3时有效 - """) - parser.add_argument("--n", default='4', required=False, help="生成prompt数量,默认为6") - parser.add_argument("--ls", default='100', required=False, help="查询alias数量,默认为100") - - # 解析命令行参数 - args = parser.parse_args() - - # 获取参数值 - arg_mode = args.m - arg_alias = args.s - arg_gen_number = args.n - arg_query_alias_number = args.ls - - if arg_mode == '2': - config_callback = create_prompts(int(arg_gen_number)) - store_data_in_database(config_callback, arg_alias) - elif arg_mode == '3': - query_result = retrieve_data_from_database(arg_alias) - if query_result is None: - print("未找到该配置的alias或id,请检查输入是否正确") - else: - global project_config - project_config = query_result - create_prompts(int(arg_gen_number)) - elif arg_mode == '4': - results = list_alias(int(arg_query_alias_number)) - for i in range(0, len(results), 2): - print(results[i], end=" ") - if i + 1 < len(results): - print(results[i + 1], end=" ") - # 换行打印下一行 - print() - elif arg_mode == '5': - delete_data_from_database(arg_alias) - else: - create_prompts(int(arg_gen_number)) - - -if __name__ == '__main__': - main() diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/DX10 Scenery Fixer V 2 7 [TOP].md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/DX10 Scenery Fixer V 2 7 [TOP].md deleted file mode 100644 index 9232fcb186dda28e5fb1fab49e6a3688e00ff382..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/DX10 Scenery Fixer V 2 7 [TOP].md +++ /dev/null @@ -1,98 +0,0 @@ -## DX10 Scenery Fixer v 2 7 - - - - - - - - - -**Click Here [https://conttooperting.blogspot.com/?l=2txRVZ](https://conttooperting.blogspot.com/?l=2txRVZ)** - - - - - - - - - - - - Here is a possible title and article for the keyword "DX10 Scenery Fixer v 2 7": - -# How to Use DX10 Scenery Fixer v 2 7 to Enhance Your FSX Experience - - - -If you are a fan of Microsoft Flight Simulator X (FSX), you may have noticed that some of the scenery and aircraft textures look blurry or distorted when you enable the DirectX 10 (DX10) mode. This is because FSX was not fully optimized for DX10, and some of the legacy add-ons are not compatible with it. Fortunately, there is a solution: DX10 Scenery Fixer v 2 7 by Stevefx. - - - -DX10 Scenery Fixer v 2 7 is a tool that allows you to display legacy scenery and aircraft with proper textures in DX10 mode. It also fixes some of the common issues that plague FSX in DX10 mode, such as flickering shadows, missing lights, and water effects. DX10 Scenery Fixer v 2 7 is compatible with both FSX and FSX Steam Edition (SE), and it can handle the scenario where both versions are installed on the same computer. - - - -## How to Install DX10 Scenery Fixer v 2 7 - - - -To install DX10 Scenery Fixer v 2 7, you need to follow these steps: - - - -1. Download DX10 Scenery Fixer v 2 7 from [Stevefx's blog](https://stevesfsxanalysis.wordpress.com/). You will need to pay a small fee to access the download link. - -2. Run the installer and follow the instructions. The installer will detect your FSX or FSX SE installation and copy the necessary files. - -3. If you have previously installed any of Stevefx's DX10 patches, the installer will handle them for you. There is no need to uninstall them. - -4. Launch FSX or FSX SE and enable DX10 mode in the graphics settings. - -5. Enjoy your improved scenery and aircraft textures! - - - -## How to Use DX10 Scenery Fixer v 2 7 - - - -To use DX10 Scenery Fixer v 2 7, you need to follow these steps: - - - -1. Launch DX10 Scenery Fixer v 2 7 from your desktop or start menu. You will see a user interface with various tabs and options. - -2. Select the "Fixes" tab and check or uncheck the boxes according to your preferences. You can hover over each option to see a tooltip explaining what it does. - -3. Select the "Controller" tab and adjust the settings for your graphics card and monitor. You can also enable or disable anti-aliasing, anisotropic filtering, and other features. - -4. Select the "Cloud Shadows" tab and enable or disable cloud shadows. You can also adjust the settings for cloud shadow quality, size, intensity, and distance. - -5. Select the "Water" tab and choose your preferred water shader. You can also adjust the settings for water color, reflection, wave animation, and foam. - -6. Select the "Lights" tab and choose your preferred light shader. You can also adjust the settings for light color, intensity, size, bloom, and flare. - -7. Select the "Advanced" tab and tweak some of the advanced options for performance and compatibility. You can also backup or restore your original shaders. - -8. Select the "Help" tab and access the user manual, FAQ, support forum, and other resources. - -9. Click on "Apply Changes" to save your settings and close DX10 Scenery Fixer v 2 7. - -10. Launch FSX or FSX SE and enjoy your enhanced scenery and aircraft textures! - - - -## Conclusion - - - -DX10 Scenery Fixer v 2 7 is a must-have tool for any FSX enthusiast who wants to use DX10 mode without compromising on quality or compatibility. It offers a range of features and options that allow you to customize your FSX experience to your liking. It also fixes some of the common problems that affect FSX in DX10 mode, such as flickering shadows, missing lights, and water effects. DX10 Scenery Fixer v 2 7 is compatible with both FSX and FSX SE, and it can handle the scenario where both versions are installed on the same computer. - - dfd1c89656 - - - - - diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Eugene Hecht Optics Solution Manual Zip !!HOT!!.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Eugene Hecht Optics Solution Manual Zip !!HOT!!.md deleted file mode 100644 index 4621acd090f7c307f9b0f9d4701f782b5ca7194a..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Eugene Hecht Optics Solution Manual Zip !!HOT!!.md +++ /dev/null @@ -1,64 +0,0 @@ -## eugene hecht optics solution manual zip - - - - - - - - - -**Click Here ---> [https://conttooperting.blogspot.com/?l=2txRU9](https://conttooperting.blogspot.com/?l=2txRU9)** - - - - - - - - - - - - - -# Eugene Hecht Optics Solution Manual Zip: A Useful Resource for Students and Instructors - - - -Optics is a branch of physics that studies the nature and behavior of light, as well as its interactions with matter. It covers topics such as reflection, refraction, diffraction, interference, polarization, dispersion, coherence, and more. Optics is essential for understanding many phenomena and applications in science and engineering, such as lasers, fiber optics, telescopes, microscopes, cameras, holography, and optical communication. - - - -One of the most popular textbooks for learning optics is *Optics* by Eugene Hecht, a professor of physics at Adelphi University. The book is now in its fifth edition and has been widely adopted by many universities and colleges around the world. It provides a comprehensive and rigorous introduction to the principles and concepts of optics, with an emphasis on physical intuition and problem-solving skills. The book also includes many examples, exercises, figures, and tables to help students master the material. - - - -However, learning optics can be challenging and requires a lot of practice and feedback. That is why many students and instructors look for a solution manual that contains detailed answers and explanations to the exercises in the book. A solution manual can help students check their understanding, learn from their mistakes, and improve their performance. It can also help instructors prepare lectures, assignments, quizzes, and exams more efficiently and effectively. - - - -Unfortunately, finding a reliable and complete solution manual for *Optics* by Eugene Hecht is not easy. The official solution manual is only available to instructors who adopt the book for their courses, and it is not accessible to students or the general public. Moreover, the official solution manual only covers some of the exercises in the book, not all of them. Therefore, many students and instructors resort to searching online for alternative sources of solutions. - - - -One of the most common keywords that people use to search for a solution manual for *Optics* by Eugene Hecht is "eugene hecht optics solution manual zip". This keyword implies that people are looking for a compressed file that contains the solution manual in PDF format or other formats. However, searching for this keyword may not yield satisfactory results. Many of the links that appear in the search results are either broken, outdated, incomplete, or unreliable. Some of them may even contain viruses or malware that can harm your computer or device. - - - -Therefore, if you are looking for a solution manual for *Optics* by Eugene Hecht, you should be careful and cautious about what you download from the internet. You should always verify the source and quality of the file before opening it or saving it to your device. You should also respect the intellectual property rights of the author and publisher of the book and use the solution manual only for educational purposes. - - - -A better way to find a solution manual for *Optics* by Eugene Hecht is to use a reputable online platform that provides high-quality academic resources for students and instructors. One such platform is StudyWithUs.net[^3^], which offers solution manuals for various textbooks in different fields of study. You can download the solution manual for *Optics*, 5th edition by Hecht (Global Edition) from StudyWithUs.net[^3^] for a reasonable price. The solution manual covers all chapters of the book and provides clear and detailed solutions to all exercises. It also follows the same notation and conventions as the book. The solution manual is in PDF format and can be easily viewed or printed on any device. - - - -If you want to learn optics effectively and efficiently, you should consider getting a solution manual for *Optics* by Eugene Hecht from StudyWithUs.net[^3^]. It will help you enhance your understanding, improve your skills, and achieve your academic goals. - - dfd1c89656 - - - - - diff --git a/spaces/hra/ChatGPT-MindMap/app.py b/spaces/hra/ChatGPT-MindMap/app.py deleted file mode 100644 index e602c7ff4c88a88458333e2e7bb8db65a1013bc7..0000000000000000000000000000000000000000 --- a/spaces/hra/ChatGPT-MindMap/app.py +++ /dev/null @@ -1,121 +0,0 @@ -import json -import requests -import gradio as gr -import random -import time -import os -import datetime -from datetime import datetime -from PIL import Image -from PIL import ImageOps -from PIL import Image, ImageDraw, ImageFont -import json -import io -from PIL import Image -import openai -import pandas as pd -from graphviz import Digraph -import plotly.express as px - -HRA_TOKEN=os.getenv("HRA_TOKEN") - - -headers = {'Content-type': 'application/json', 'Accept': 'text/plain'} -url_hraprompts='https://us-central1-createinsightsproject.cloudfunctions.net/gethrahfprompts' - -data={"prompt_type":'mindmap_prompt',"hra_token":HRA_TOKEN} -try: - r = requests.post(url_hraprompts, data=json.dumps(data), headers=headers) -except requests.exceptions.ReadTimeout as e: - print(e) -print(r.content) - - -prompt_text=str(r.content, 'UTF-8') - -print(prompt_text) - -def getmindmap(topic,openapikey): - print('*******************') - dateforfilesave=datetime.today().strftime("%d-%m-%Y %I:%M%p") - print(topic) - print(dateforfilesave) - - os.environ['OPENAI_API_KEY'] = str(openapikey) - openai.api_key=str(openapikey) - - prompt=prompt_text+topic - resp=openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - max_tokens=4000, - temperature=0 - ) - print(resp) - - df=pd.DataFrame(json.loads(resp['choices'][0]['text'])) - - df['level1']=df['children'].apply(lambda x: x['name']) - df['level1_tmp']=df['children'].apply(lambda x: x['children']) - s = df.pop('level1_tmp').explode().to_frame() - df = pd.merge(df.reset_index(), - s.reset_index(),on='index' - - ) - df['level2']=df['level1_tmp'].apply(lambda x: x['name']) - df['count']=[1]*len(df) - - dot = Digraph() - - dot.graph_attr['rankdir'] = 'LR' - for item in list(set(df['level1'].tolist())): - dot.edge(str(list(set(df["name"].tolist()))[0]), str(item), label='') - for item in list(set(df['level1'].tolist())): - tempdf=df[df['level1']==item] - for stuff in tempdf['level2'].tolist(): - dot.edge(str(item), str(stuff), label='',) - - r=requests.get('https://quickchart.io/graphviz?format=png&graph='+dot.source) - dataBytesIO = io.BytesIO(r.content) - img=Image.open(dataBytesIO) - img.seek(0) - name='temp.png' - img.save(name) - - fig = px.treemap(df, path=['name', 'level1', 'level2'], - values='count', color='level1') - fig.update_layout(margin = dict(t=50, l=25, r=25, b=25)) - fig.show() - fig.write_image('temp1.png') - img1 = Image.open("temp1.png") - - - return img,img1 - -with gr.Blocks() as demo: - gr.Markdown("

            Mind Map Generator

            ") - gr.Markdown( - """Enter a topic and get a quick Mind Map. Use examples as a guide. \n\nNote: ChatGPT (text-davinci-003) is used. The error condition typically occurs with the wrong OpenAI API key or due to ChatGPT's inability to give a structured mindmap""" - ) - with gr.Row() as row: - with gr.Column(): - textbox1 = gr.Textbox(placeholder="Enter topic for Mind Map...", lines=1,label='Your topic (Mandatory)') - with gr.Column(): - textbox2 = gr.Textbox(placeholder="Enter OpenAI API Key...", lines=1,label='Your API Key (Mandatory)') - with gr.Row() as row: - with gr.Column(): - btn = gr.Button("Generate") - with gr.Column(): - examples = gr.Examples(examples=['Avengers','Heavy metal music','Face recognition','Arsenal Football Club'], - inputs=[textbox1]) - with gr.Row() as row: - with gr.Column(): - output_image1 = gr.components.Image(label="Your Mind Map as Graph") - with gr.Column(): - output_image2 = gr.components.Image(label="Your Mind Map as Tree Map") - - - btn.click(getmindmap,inputs=[textbox1,textbox2], outputs=[output_image1,output_image2]) - - -demo.launch() \ No newline at end of file diff --git a/spaces/hsukqilee/NSFW-API/NSFW-API/README.md b/spaces/hsukqilee/NSFW-API/NSFW-API/README.md deleted file mode 100644 index 2e408080f50de252f9fde011eaa697916541a974..0000000000000000000000000000000000000000 --- a/spaces/hsukqilee/NSFW-API/NSFW-API/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: NSFW API -emoji: 😻 -colorFrom: red -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huy-ha/semabs-relevancy/CLIP/setup.py b/spaces/huy-ha/semabs-relevancy/CLIP/setup.py deleted file mode 100644 index 1026ae8a1c4d99f7107cd2eaffb0b391e87a121f..0000000000000000000000000000000000000000 --- a/spaces/huy-ha/semabs-relevancy/CLIP/setup.py +++ /dev/null @@ -1,21 +0,0 @@ -import os - -import pkg_resources -from setuptools import setup, find_packages - -setup( - name="clip", - py_modules=["clip"], - version="1.0", - description="", - author="OpenAI", - packages=find_packages(exclude=["tests*"]), - install_requires=[ - str(r) - for r in pkg_resources.parse_requirements( - open(os.path.join(os.path.dirname(__file__), "requirements.txt")) - ) - ], - include_package_data=True, - extras_require={"dev": ["pytest"]}, -) diff --git a/spaces/hyuan5040/Speech-ChatGPT-Speech/README.md b/spaces/hyuan5040/Speech-ChatGPT-Speech/README.md deleted file mode 100644 index 27aa75e26c35e22ae173cf410150d8c4d2673481..0000000000000000000000000000000000000000 --- a/spaces/hyuan5040/Speech-ChatGPT-Speech/README.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Speech2ChatGPT2Speech -emoji: 🗣️🙉 -colorFrom: indigo -colorTo: yellow -sdk: gradio -python_version: 3.9 -sdk_version: 3.12.0 -app_file: app.py -models: -- neongeckocom/tts-vits-ljspeech-en -- neongeckocom/tts-vits-css10-es -- neongeckocom/tts-vits-css10-fr -- neongeckocom/tts-vits-css10-de -- neongeckocom/tts-vits-cv-it -- neongeckocom/tts-vits-mai-pl -- neongeckocom/tts-vits-mai-uk -- neongeckocom/tts-vits-cv-ro -- neongeckocom/tts-vits-css10-hu -- neongeckocom/tts-vits-cv-el -- neongeckocom/tts-vits-cv-cs -- neongeckocom/tts-vits-cv-sv -- neongeckocom/tts-vits-cv-pt -- neongeckocom/tts-vits-cv-bg -- neongeckocom/tts-vits-cv-hr -- neongeckocom/tts-vits-cv-da -- neongeckocom/tts-vits-cv-sk -- neongeckocom/tts-vits-css10-nl -- neongeckocom/tts-vits-css10-fi -- neongeckocom/tts-vits-cv-lt -- neongeckocom/tts-vits-cv-sl -- neongeckocom/tts-vits-cv-lv -- neongeckocom/tts-vits-cv-et -- neongeckocom/tts-vits-cv-ga -- neongeckocom/tts-vits-cv-mt -pinned: false -license: apache-2.0 -duplicated_from: Yusin/Speech-ChatGPT-Speech ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/docs/speed_benchmark.md b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/docs/speed_benchmark.md deleted file mode 100644 index 055aee0defe2c43a523ced48260242f0f99b7cea..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/docs/speed_benchmark.md +++ /dev/null @@ -1,93 +0,0 @@ -## Test Training Speed - -- Test Commands - -You need to use the following two commands to test the Partial FC training performance. -The number of identites is **3 millions** (synthetic data), turn mixed precision training on, backbone is resnet50, -batch size is 1024. -```shell -# Model Parallel -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions -# Partial FC 0.1 -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions_pfc -``` - -- GPU Memory - -``` -# (Model Parallel) gpustat -i -[0] Tesla V100-SXM2-32GB | 64'C, 94 % | 30338 / 32510 MB -[1] Tesla V100-SXM2-32GB | 60'C, 99 % | 28876 / 32510 MB -[2] Tesla V100-SXM2-32GB | 60'C, 99 % | 28872 / 32510 MB -[3] Tesla V100-SXM2-32GB | 69'C, 99 % | 28872 / 32510 MB -[4] Tesla V100-SXM2-32GB | 66'C, 99 % | 28888 / 32510 MB -[5] Tesla V100-SXM2-32GB | 60'C, 99 % | 28932 / 32510 MB -[6] Tesla V100-SXM2-32GB | 68'C, 100 % | 28916 / 32510 MB -[7] Tesla V100-SXM2-32GB | 65'C, 99 % | 28860 / 32510 MB - -# (Partial FC 0.1) gpustat -i -[0] Tesla V100-SXM2-32GB | 60'C, 95 % | 10488 / 32510 MB │······················· -[1] Tesla V100-SXM2-32GB | 60'C, 97 % | 10344 / 32510 MB │······················· -[2] Tesla V100-SXM2-32GB | 61'C, 95 % | 10340 / 32510 MB │······················· -[3] Tesla V100-SXM2-32GB | 66'C, 95 % | 10340 / 32510 MB │······················· -[4] Tesla V100-SXM2-32GB | 65'C, 94 % | 10356 / 32510 MB │······················· -[5] Tesla V100-SXM2-32GB | 61'C, 95 % | 10400 / 32510 MB │······················· -[6] Tesla V100-SXM2-32GB | 68'C, 96 % | 10384 / 32510 MB │······················· -[7] Tesla V100-SXM2-32GB | 64'C, 95 % | 10328 / 32510 MB │······················· -``` - -- Training Speed - -```python -# (Model Parallel) trainging.log -Training: Speed 2271.33 samples/sec Loss 1.1624 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 2269.94 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 2272.67 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 2266.55 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 2272.54 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 - -# (Partial FC 0.1) trainging.log -Training: Speed 5299.56 samples/sec Loss 1.0965 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 5296.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 5304.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 5274.43 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 5300.10 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 -``` - -In this test case, Partial FC 0.1 only use1 1/3 of the GPU memory of the model parallel, -and the training speed is 2.5 times faster than the model parallel. - - -## Speed Benchmark - -1. Training speed of different parallel methods (samples/second), Tesla V100 32GB * 8. (Larger is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 4681 | 4824 | 5004 | -|250000 | 4047 | 4521 | 4976 | -|500000 | 3087 | 4013 | 4900 | -|1000000 | 2090 | 3449 | 4803 | -|1400000 | 1672 | 3043 | 4738 | -|2000000 | - | 2593 | 4626 | -|4000000 | - | 1748 | 4208 | -|5500000 | - | 1389 | 3975 | -|8000000 | - | - | 3565 | -|16000000 | - | - | 2679 | -|29000000 | - | - | 1855 | - -2. GPU memory cost of different parallel methods (GB per GPU), Tesla V100 32GB * 8. (Smaller is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 7358 | 5306 | 4868 | -|250000 | 9940 | 5826 | 5004 | -|500000 | 14220 | 7114 | 5202 | -|1000000 | 23708 | 9966 | 5620 | -|1400000 | 32252 | 11178 | 6056 | -|2000000 | - | 13978 | 6472 | -|4000000 | - | 23238 | 8284 | -|5500000 | - | 32188 | 9854 | -|8000000 | - | - | 12310 | -|16000000 | - | - | 19950 | -|29000000 | - | - | 32324 | diff --git a/spaces/iamstolas/STOLAS/src/components/button-scroll-to-bottom.tsx b/spaces/iamstolas/STOLAS/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/inamXcontru/PoeticTTS/Avengers Movie Free Download In Hindi 720p Torrentl How to Watch All Series in HD Quality.md b/spaces/inamXcontru/PoeticTTS/Avengers Movie Free Download In Hindi 720p Torrentl How to Watch All Series in HD Quality.md deleted file mode 100644 index 9929a7b183054fbefd558ba29efb1f9e9ff9e844..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Avengers Movie Free Download In Hindi 720p Torrentl How to Watch All Series in HD Quality.md +++ /dev/null @@ -1,5 +0,0 @@ - -

            Most Viewed, Most Favorite, Top Rating, Top IMDb movies online. Here we can download and watch 123movies movies offline. 123Movies website is the best alternative to Scarlet Bond's (2021) free online. We will recommend 123Movies as the best Solarmovie alternative There are a

            -

            Avengers Movie Free Download In Hindi 720p Torrentl


            Download Zip · https://gohhs.com/2uz3eW



            aaccfb2cb3
            -
            -
            \ No newline at end of file diff --git a/spaces/innovatorved/whisper.api/app/tests/__init__.py b/spaces/innovatorved/whisper.api/app/tests/__init__.py deleted file mode 100644 index b9887a5d4ef16e805b8f16806987d69b3f700ab5..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/tests/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -import pytest -from fastapi.testclient import TestClient -from sqlalchemy import create_engine -from sqlalchemy.orm import sessionmaker - -from app.core.config import settings -from app.main import app -from app.tests.utils.utils import override_get_db, get_db - -# Create test database -TEST_DATABASE_URL = settings.TEST_DATABASE_URL -engine = create_engine(TEST_DATABASE_URL) -TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) - - -# Define test client -@pytest.fixture(scope="module") -def test_client(): - with TestClient(app) as client: - yield client - - -# Define test database -@pytest.fixture(scope="module") -def test_db(): - db = TestingSessionLocal() - yield db - db.close() - - -# Override get_db function for testing -@pytest.fixture(autouse=True) -def override_get_db(monkeypatch): - monkeypatch.setattr("app.api.dependencies.get_db", get_db) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Modify Exe Files And Crack A Program.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Modify Exe Files And Crack A Program.md deleted file mode 100644 index b747822d5ffa1688d52b32b888db968fbb158901..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Modify Exe Files And Crack A Program.md +++ /dev/null @@ -1,64 +0,0 @@ -

            Modify Exe Files And Crack A Program


            DOWNLOADhttps://urlin.us/2uExHn



            -
            -to do this automatically? - - I know it's a bad idea - - MalwareSami: bad idea to what? to not have passwords. ;) - - MalwareSami: What if you install software as root user? - - MalwareSami: if you're on Ubuntu, you can sudo apt-get install samba - - Just doin a fresh install - - or a command line tool, which will be easier - - I'm a newb - - I've been using Gentoo before - - I know how to do stuff - - MalwareSami: use the live cd to get your data off. - - I just got the iso off the ubuntu website - - What's the link to that? - - the installer.. - - You can mount/access the contents of the iso files with a live cd - - dr_willis: cool, I've never seen that on a desktop - - Let me get into the livecd - - just to check you have good backups of your important stuff is a good idea. - - Yeah, I'm good - - I have an external hard drive - - I know i have a few clones of what i consider my most important stuff. ;) - - I just need to transfer the files over, I don't need to backup - - So I can re-install Ubuntu when it's done? - - MalwareSami: do you have a complete partition on that HDD? - - the installer will be able to access your external hd and make backups - - Yes, I have - - I have the Ubuntu desktop on there - - MalwareSami: so, you want to remove Ubuntu, and just keep the files on the external HDD? - - I do - - then 4fefd39f24
            -
            -
            -

            diff --git a/spaces/ivntl/MMS/asr.py b/spaces/ivntl/MMS/asr.py deleted file mode 100644 index a96c30a579e13c73dd35fbd66b3180b80735388f..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/asr.py +++ /dev/null @@ -1,128 +0,0 @@ -import librosa -from transformers import Wav2Vec2ForCTC, AutoProcessor -import torch -import json - -from huggingface_hub import hf_hub_download -from torchaudio.models.decoder import ctc_decoder - -ASR_SAMPLING_RATE = 16_000 - -ASR_LANGUAGES = {} -with open(f"data/asr/all_langs.tsv") as f: - for line in f: - iso, name = line.split(" ", 1) - ASR_LANGUAGES[iso] = name - -MODEL_ID = "facebook/mms-1b-all" - -processor = AutoProcessor.from_pretrained(MODEL_ID) -model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) - - -lm_decoding_config = {} -lm_decoding_configfile = hf_hub_download( - repo_id="facebook/mms-cclms", - filename="decoding_config.json", - subfolder="mms-1b-all", -) - -with open(lm_decoding_configfile) as f: - lm_decoding_config = json.loads(f.read()) - -# allow language model decoding for "eng" - -decoding_config = lm_decoding_config["eng"] - -lm_file = hf_hub_download( - repo_id="facebook/mms-cclms", - filename=decoding_config["lmfile"].rsplit("/", 1)[1], - subfolder=decoding_config["lmfile"].rsplit("/", 1)[0], -) -token_file = hf_hub_download( - repo_id="facebook/mms-cclms", - filename=decoding_config["tokensfile"].rsplit("/", 1)[1], - subfolder=decoding_config["tokensfile"].rsplit("/", 1)[0], -) -lexicon_file = None -if decoding_config["lexiconfile"] is not None: - lexicon_file = hf_hub_download( - repo_id="facebook/mms-cclms", - filename=decoding_config["lexiconfile"].rsplit("/", 1)[1], - subfolder=decoding_config["lexiconfile"].rsplit("/", 1)[0], - ) - -beam_search_decoder = ctc_decoder( - lexicon=lexicon_file, - tokens=token_file, - lm=lm_file, - nbest=1, - beam_size=500, - beam_size_token=50, - lm_weight=float(decoding_config["lmweight"]), - word_score=float(decoding_config["wordscore"]), - sil_score=float(decoding_config["silweight"]), - blank_token="", -) - -def transcribe( - audio_source=None, microphone=None, file_upload=None, lang="eng (English)" -): - if type(microphone) is dict: - # HACK: microphone variable is a dict when running on examples - microphone = microphone["name"] - audio_fp = ( - file_upload if "upload" in str(audio_source or "").lower() else microphone - ) - - if audio_fp is None: - return "ERROR: You have to either use the microphone or upload an audio file" - - audio_samples = librosa.load(audio_fp, sr=ASR_SAMPLING_RATE, mono=True)[0] - - lang_code = lang.split()[0] - processor.tokenizer.set_target_lang(lang_code) - model.load_adapter(lang_code) - - inputs = processor( - audio_samples, sampling_rate=ASR_SAMPLING_RATE, return_tensors="pt" - ) - - # set device - if torch.cuda.is_available(): - device = torch.device("cuda") - elif ( - hasattr(torch.backends, "mps") - and torch.backends.mps.is_available() - and torch.backends.mps.is_built() - ): - device = torch.device("mps") - else: - device = torch.device("cpu") - - model.to(device) - inputs = inputs.to(device) - - with torch.no_grad(): - outputs = model(**inputs).logits - - if lang_code != "eng": - ids = torch.argmax(outputs, dim=-1)[0] - transcription = processor.decode(ids) - else: - beam_search_result = beam_search_decoder(outputs.to("cpu")) - transcription = " ".join(beam_search_result[0][0].words).strip() - - return transcription - - -ASR_EXAMPLES = [ - [None, "assets/english.mp3", None, "eng (English)"], - # [None, "assets/tamil.mp3", None, "tam (Tamil)"], - # [None, "assets/burmese.mp3", None, "mya (Burmese)"], -] - -ASR_NOTE = """ -The above demo uses beam-search decoding with LM for English and greedy decoding results for all other languages. -Checkout the instructions [here](https://huggingface.co/facebook/mms-1b-all) on how to run LM decoding for other languages. -""" diff --git a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. - bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. - // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. - bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/jasonwu92/image-search-playground/streamlit_app.py b/spaces/jasonwu92/image-search-playground/streamlit_app.py deleted file mode 100644 index 1c5b9d03128647b8c91c9dcbad59eafde9ecbe79..0000000000000000000000000000000000000000 --- a/spaces/jasonwu92/image-search-playground/streamlit_app.py +++ /dev/null @@ -1,39 +0,0 @@ -import streamlit as st -import numpy as np -from PIL import Image - -def process_image(input_image): - # Your image processing function goes here - output_image = input_image.copy() - return output_image - -# Set the title of the web application -st.title('Multiple Input and Output Images Interface') - -# Create a sidebar for image inputs -st.sidebar.title('Input Images') - -# Set up a file uploader in the sidebar for each input image -uploaded_images = [] -num_images = 3 # The number of input images -for i in range(num_images): - uploaded_image = st.sidebar.file_uploader(f'Upload Image {i+1}', type=['png', 'jpg', 'jpeg']) - if uploaded_image is not None: - uploaded_images.append(uploaded_image) - -# Display input images and process them -if uploaded_images: - st.header('Input Images') - input_images = [] - for img in uploaded_images: - input_img = Image.open(img) - input_images.append(input_img) - st.image(input_img, width=200, caption='Uploaded Image') - - # Process input images and display output images - st.header('Output Images') - for input_img in input_images: - output_img = process_image(input_img) - st.image(output_img, width=200, caption='Processed Image') -else: - st.warning('Please upload images in the sidebar.') diff --git a/spaces/jbilcke-hf/observer/src/app/layout.tsx b/spaces/jbilcke-hf/observer/src/app/layout.tsx deleted file mode 100644 index 28151e2ea2a3bbd44e001cc753cc53702f290339..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/observer/src/app/layout.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import './globals.css' -import type { Metadata } from 'next' -import { Inter } from 'next/font/google' - -const inter = Inter({ subsets: ['latin'] }) - -export const metadata: Metadata = { - title: 'Idfx', - description: 'Idfx', -} - -export default function RootLayout({ - children, -}: { - children: React.ReactNode -}) { - return ( - - - {children} - - - ) -} diff --git a/spaces/jhonparra18/ocr-LLM-image-summarizer/app.py b/spaces/jhonparra18/ocr-LLM-image-summarizer/app.py deleted file mode 100644 index 8a1620a53506c73a658879054954c537a53b4ae9..0000000000000000000000000000000000000000 --- a/spaces/jhonparra18/ocr-LLM-image-summarizer/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import streamlit as st -from langchain.callbacks import StreamlitCallbackHandler -import numpy as np -from PIL import Image -from app_utils import TEMP_DIR_NAME,save_uploaded_file,reset_chat -import os -from pathlib import Path -from text_summarizer import agent,processor - -BOT_DEFAULT_MSG="Hello 👋 I'm a test AI OCR assistant to help you with your questions about your receipts or similar images containing text. Also feel free to ask me anything" - -st.set_page_config(page_title="OCR+LLM Image summarizer",layout='wide',page_icon=":shark:") - -#placeholders for temporal image path and an image processor in case we want to read img text separately -IMAGE_TMP_PATH=None -PROCESSOR=processor -img_text="" -inject_text=False - -with st.sidebar: - - st.markdown( - f"

            Invoice|Receipt Summarizer using OpenCV+Tesseract+LLM



            ", - unsafe_allow_html=True - ) - input_image = st.file_uploader(label='OCR+LLM Image summarizer',help="Upload an image",type=['jpg','png','jpeg']) - - if input_image is not None: - save_uploaded_file(input_image) - IMAGE_TMP_PATH=os.path.join(TEMP_DIR_NAME,input_image.name) - st.markdown(f"

            Image Uploaded and saved
            ",unsafe_allow_html=True) - st.image(Image.open(IMAGE_TMP_PATH)) - st.markdown("***") - inject_text=st.checkbox(label="Inject OCR output",value=False,help="Injects text found in the image without using the agent Action (Speeds response)") - if inject_text: - img_text=PROCESSOR.run(IMAGE_TMP_PATH) - - - st.markdown("***") - st.button("Reset Chat History", type="secondary", on_click=reset_chat,use_container_width=True) - _,col_c,_=st.columns(3) - with col_c: - st.markdown("[![Foo](https://img.icons8.com/material-outlined/96/000000/github.png)](https://github.com/statscol/invoice-llm-summarizer)") - - -# Initialize chat history based on streamlit doc for chat applications https://docs.streamlit.io/knowledge-base/tutorials/build-conversational-apps -if "messages" not in st.session_state: - reset_chat() - st.session_state.messages.append({"role": "assistant", "content": BOT_DEFAULT_MSG}) - - -# Display chat messages from history on app rerun -for message in st.session_state.messages: - with st.chat_message(message["role"]): - st.markdown(message["content"]) - -prompt=st.chat_input("Write a message to the AI assistant | Escribe un mensaje para el asistente de IA") - -if prompt: - - st.session_state.messages.append({"role": "user", "content": prompt}) - st.chat_message("user").markdown(prompt) - prompt_ad=f'{prompt}, img path: {IMAGE_TMP_PATH}' if (input_image is not None and not inject_text) else (f'{prompt} text: {img_text}' if inject_text else prompt) - #streamlit callback https://python.langchain.com/docs/integrations/callbacks/streamlit - print(f'PROMPT: {prompt_ad}') - st_callback = StreamlitCallbackHandler(st.container()) - - #hotfix to errors - try: - response = agent.run(prompt_ad,callbacks=[st_callback]) - except ValueError as e: - response = "Sorry i could't understand your last question." - - # Add assistant response to chat history - st.session_state.messages.append({"role": "assistant", "content": response}) - st.chat_message("assistant").markdown(response) diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig2c_Enzyme_promiscuity_all.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig2c_Enzyme_promiscuity_all.py deleted file mode 100644 index 9a516dd1f41d6878235ab91246b157474d82a186..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig2c_Enzyme_promiscuity_all.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN - -import math -import numpy as np -import matplotlib.pyplot as plt -from matplotlib import rc -from scipy import stats -import seaborn as sns -import pandas as pd -from scipy.stats import ranksums - - -def median(lst): - sortedLst = sorted(lst) - lstLen = len(lst) - index = (lstLen - 1) // 2 - - if (lstLen % 2): - return sortedLst[index] - else: - return (sortedLst[index] + sortedLst[index + 1])/2.0 - -def main() : - - with open('../../Data/enzyme_promiscuity/all_preferred_alternative_random.txt', 'r') as infile : - lines = infile.readlines() - - alldata = dict() - alldata['type'] = list() - alldata['value'] = list() - preferred_substrates = list() - alternative_substrates = list() - random_substrates = list() - for line in lines[1:] : - data = line.strip().split('\t') - order, value, substrate_type = data[0], data[1], data[2] - - if substrate_type == 'Preferred' : - preferred_substrates.append(float(value)) - alldata['type'].append('Preferred') - alldata['value'].append(float(value)) - if substrate_type == 'Alternative' : - alternative_substrates.append(float(value)) - alldata['type'].append('Alternative') - alldata['value'].append(float(value)) - if substrate_type == 'Random' : - random_substrates.append(float(value)) - alldata['type'].append('Random') - alldata['value'].append(float(value)) - - p_value_1 = ranksums(preferred_substrates, alternative_substrates)[1] - p_value_2 = ranksums(preferred_substrates, random_substrates)[1] - p_value_3 = ranksums(alternative_substrates, random_substrates)[1] - print('The amount of preferred_substrates:', len(preferred_substrates)) - print('The amount of alternative_substrates:', len(alternative_substrates)) - print('The amount of random_substrates:', len(random_substrates)) - print('The median value of preferred_substrates: %.4f' % median(preferred_substrates)) - print('The median value of alternative_substrates: %.4f' % median(alternative_substrates)) - print('The median value of random_substrates: %.4f' % median(random_substrates)) - print('The real value of preferred substrates: %.2f' % pow(10, median(preferred_substrates))) - print('The real value of alternative substrates: %.2f' % pow(10, median(alternative_substrates))) - print('The real value of random substrates: %.2f' % pow(10, median(random_substrates))) - print('P value between preferred_substrates and alternative_substrates is: %s' % p_value_1) - print('P value between preferred_substrates and random_substrates is: %s' % p_value_2) - print('P value between alternative_substrates and random_substrates is: %s' % p_value_3) - - # The amount of preferred_substrates: 945 - # The amount of alternative_substrates: 4238 - # The amount of random_substrates: 945 - # The median value of preferred_substrates: 1.0443 - # The median value of alternative_substrates: 0.7787 - # The median value of random_substrates: 0.5448 - # The median value of preferred substrates: 11.07 - # The median value of alternative substrates: 6.01 - # The median value of random substrates: 3.51 - # P value between preferred_substrates and alternative_substrates is: 1.3437080577210658e-12 - # P value between preferred_substrates and random_substrates is: 3.4650135085615217e-19 - # P value between alternative_substrates and random_substrates is: 9.272804271828343e-06 - - # Plot the boxplot figures between the Alternative and Preferred - allData = pd.DataFrame(alldata) - # print(type(allData)) - - plt.figure(figsize=(1.5,1.5)) - - # To solve the 'Helvetica' font cannot be used in PDF file - # https://stackoverflow.com/questions/59845568/the-pdf-backend-does-not-currently-support-the-selected-font - rc('font',**{'family':'serif','serif':['Helvetica']}) - plt.rcParams['pdf.fonttype'] = 42 - - plt.axes([0.12,0.12,0.83,0.83]) - - plt.tick_params(direction='in') - plt.tick_params(which='major',length=1.5) - plt.tick_params(which='major',width=0.4) - plt.tick_params(which='major',width=0.4) - - # rectangular box plot - palette = {"Random": '#FF8C00', "Alternative": '#2166ac', "Preferred": '#b2182b'} - - # for ind in allData.index: - # allData.loc[ind,'entry'] = '${0}$'.format(allData.loc[ind,'entry']) - - ax = sns.boxplot(data=alldata, x="type", y="value", order = ["Random", "Alternative", "Preferred"], - palette=palette, showfliers=False, linewidth=0.5, width=0.5) # boxprops=dict(alpha=1.0) - - ax = sns.stripplot(data=alldata, x="type", y="value", order = ["Random", "Alternative", "Preferred"], - palette=palette, size=0.7) # boxprops=dict(alpha=1.0) - - # https://stackoverflow.com/questions/58476654/how-to-remove-or-hide-x-axis-label-from-seaborn-boxplot - # plt.xlabel(None) will remove the Label, but not the ticks. - ax.set(xlabel=None) - - for patch in ax.artists: - r, g, b, a = patch.get_facecolor() - patch.set_facecolor((r, g, b, 0.3)) - - # print(ax.artists) - # print(ax.lines) - # print(len(ax.lines)) - # https://cduvallet.github.io/posts/2018/03/boxplots-in-python - for i, artist in enumerate(ax.artists): - # print(i) - - if i % 3 == 0: - col = '#FF8C00' - if i % 3 == 1: - col = '#2166ac' - if i % 3 == 2: - col = '#b2182b' - - # This sets the color for the main box - artist.set_edgecolor(col) - - # Each box has 5 associated Line2D objects (to make the whiskers, fliers, etc.) - # Loop over them here, and use the same colour as above - for j in range(i*5,i*5+5): - # print(j) - line = ax.lines[j] - line.set_color(col) - line.set_mfc(col) - line.set_mec(col) - handles = [ax.artists[0], ax.artists[1]] - - # for tick in ax.get_xticklabels() : - # tick.set_rotation(30) - - plt.rcParams['font.family'] = 'Helvetica' - - plt.ylabel("Predicted $k$$_\mathregular{cat}$ value [log10]", fontname='Helvetica', fontsize=7) - - plt.ylim(-4,8) - plt.yticks([-4, -2, 0, 2, 4, 6, 8]) - - plt.xticks(fontsize=7, rotation=30, ha='right') - plt.yticks(fontsize=6) - - ax.spines['bottom'].set_linewidth(0.5) - ax.spines['left'].set_linewidth(0.5) - ax.spines['top'].set_linewidth(0.5) - ax.spines['right'].set_linewidth(0.5) - - plt.savefig("../../Results/figures/Fig2c.pdf", dpi=400, bbox_inches = 'tight') - - -if __name__ == '__main__' : - main() diff --git a/spaces/jjie/DeepDanbooru_string/app.py b/spaces/jjie/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/jjie/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "

            " + "
            \n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "

            " - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -

            PNG Info

            -""" - for key, text in items.items(): - info += f""" -
            -

            {plaintext_to_html(str(key))}

            -

            {plaintext_to_html(str(text))}

            -
            -""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

            {message}

            " - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/utils/cleaners.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/utils/cleaners.py deleted file mode 100644 index eab63f05c9cc7cc0b583992eac94058097f3c191..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/utils/cleaners.py +++ /dev/null @@ -1,88 +0,0 @@ -""" -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You"ll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -""" - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r"\s+") - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1]) for x in [ - ("mrs", "misess"), - ("mr", "mister"), - ("dr", "doctor"), - ("st", "saint"), - ("co", "company"), - ("jr", "junior"), - ("maj", "major"), - ("gen", "general"), - ("drs", "doctors"), - ("rev", "reverend"), - ("lt", "lieutenant"), - ("hon", "honorable"), - ("sgt", "sergeant"), - ("capt", "captain"), - ("esq", "esquire"), - ("ltd", "limited"), - ("col", "colonel"), - ("ft", "fort"), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - """lowercase input tokens.""" - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, " ", text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - """Basic pipeline that lowercases and collapses whitespace without transliteration.""" - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - """Pipeline for non-English text that transliterates to ASCII.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - """Pipeline for English text, including number and abbreviation expansion.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/FtexImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/FtexImagePlugin.py deleted file mode 100644 index c46b2f28ba6c889d643db3bcf8fccade7fdc6d2d..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/FtexImagePlugin.py +++ /dev/null @@ -1,113 +0,0 @@ -""" -A Pillow loader for .ftc and .ftu files (FTEX) -Jerome Leclanche - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -Independence War 2: Edge Of Chaos - Texture File Format - 16 October 2001 - -The textures used for 3D objects in Independence War 2: Edge Of Chaos are in a -packed custom format called FTEX. This file format uses file extensions FTC -and FTU. -* FTC files are compressed textures (using standard texture compression). -* FTU files are not compressed. -Texture File Format -The FTC and FTU texture files both use the same format. This -has the following structure: -{header} -{format_directory} -{data} -Where: -{header} = { - u32:magic, - u32:version, - u32:width, - u32:height, - u32:mipmap_count, - u32:format_count -} - -* The "magic" number is "FTEX". -* "width" and "height" are the dimensions of the texture. -* "mipmap_count" is the number of mipmaps in the texture. -* "format_count" is the number of texture formats (different versions of the -same texture) in this file. - -{format_directory} = format_count * { u32:format, u32:where } - -The format value is 0 for DXT1 compressed textures and 1 for 24-bit RGB -uncompressed textures. -The texture data for a format starts at the position "where" in the file. - -Each set of texture data in the file has the following structure: -{data} = format_count * { u32:mipmap_size, mipmap_size * { u8 } } -* "mipmap_size" is the number of bytes in that mip level. For compressed -textures this is the size of the texture data compressed with DXT1. For 24 bit -uncompressed textures, this is 3 * width * height. Following this are the image -bytes for that mipmap level. - -Note: All data is stored in little-Endian (Intel) byte order. -""" - -import struct -from enum import IntEnum -from io import BytesIO - -from . import Image, ImageFile - -MAGIC = b"FTEX" - - -class Format(IntEnum): - DXT1 = 0 - UNCOMPRESSED = 1 - - -class FtexImageFile(ImageFile.ImageFile): - format = "FTEX" - format_description = "Texture File Format (IW2:EOC)" - - def _open(self): - if not _accept(self.fp.read(4)): - msg = "not an FTEX file" - raise SyntaxError(msg) - struct.unpack(" int: - """Perform shell completion for the given CLI program. - - :param cli: Command being called. - :param ctx_args: Extra arguments to pass to - ``cli.make_context``. - :param prog_name: Name of the executable in the shell. - :param complete_var: Name of the environment variable that holds - the completion instruction. - :param instruction: Value of ``complete_var`` with the completion - instruction and shell, in the form ``instruction_shell``. - :return: Status code to exit with. - """ - shell, _, instruction = instruction.partition("_") - comp_cls = get_completion_class(shell) - - if comp_cls is None: - return 1 - - comp = comp_cls(cli, ctx_args, prog_name, complete_var) - - if instruction == "source": - echo(comp.source()) - return 0 - - if instruction == "complete": - echo(comp.complete()) - return 0 - - return 1 - - -class CompletionItem: - """Represents a completion value and metadata about the value. The - default metadata is ``type`` to indicate special shell handling, - and ``help`` if a shell supports showing a help string next to the - value. - - Arbitrary parameters can be passed when creating the object, and - accessed using ``item.attr``. If an attribute wasn't passed, - accessing it returns ``None``. - - :param value: The completion suggestion. - :param type: Tells the shell script to provide special completion - support for the type. Click uses ``"dir"`` and ``"file"``. - :param help: String shown next to the value if supported. - :param kwargs: Arbitrary metadata. The built-in implementations - don't use this, but custom type completions paired with custom - shell support could use it. - """ - - __slots__ = ("value", "type", "help", "_info") - - def __init__( - self, - value: t.Any, - type: str = "plain", - help: t.Optional[str] = None, - **kwargs: t.Any, - ) -> None: - self.value: t.Any = value - self.type: str = type - self.help: t.Optional[str] = help - self._info = kwargs - - def __getattr__(self, name: str) -> t.Any: - return self._info.get(name) - - -# Only Bash >= 4.4 has the nosort option. -_SOURCE_BASH = """\ -%(complete_func)s() { - local IFS=$'\\n' - local response - - response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \ -%(complete_var)s=bash_complete $1) - - for completion in $response; do - IFS=',' read type value <<< "$completion" - - if [[ $type == 'dir' ]]; then - COMPREPLY=() - compopt -o dirnames - elif [[ $type == 'file' ]]; then - COMPREPLY=() - compopt -o default - elif [[ $type == 'plain' ]]; then - COMPREPLY+=($value) - fi - done - - return 0 -} - -%(complete_func)s_setup() { - complete -o nosort -F %(complete_func)s %(prog_name)s -} - -%(complete_func)s_setup; -""" - -_SOURCE_ZSH = """\ -#compdef %(prog_name)s - -%(complete_func)s() { - local -a completions - local -a completions_with_descriptions - local -a response - (( ! $+commands[%(prog_name)s] )) && return 1 - - response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \ -%(complete_var)s=zsh_complete %(prog_name)s)}") - - for type key descr in ${response}; do - if [[ "$type" == "plain" ]]; then - if [[ "$descr" == "_" ]]; then - completions+=("$key") - else - completions_with_descriptions+=("$key":"$descr") - fi - elif [[ "$type" == "dir" ]]; then - _path_files -/ - elif [[ "$type" == "file" ]]; then - _path_files -f - fi - done - - if [ -n "$completions_with_descriptions" ]; then - _describe -V unsorted completions_with_descriptions -U - fi - - if [ -n "$completions" ]; then - compadd -U -V unsorted -a completions - fi -} - -if [[ $zsh_eval_context[-1] == loadautofunc ]]; then - # autoload from fpath, call function directly - %(complete_func)s "$@" -else - # eval/source/. command, register function for later - compdef %(complete_func)s %(prog_name)s -fi -""" - -_SOURCE_FISH = """\ -function %(complete_func)s; - set -l response (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \ -COMP_CWORD=(commandline -t) %(prog_name)s); - - for completion in $response; - set -l metadata (string split "," $completion); - - if test $metadata[1] = "dir"; - __fish_complete_directories $metadata[2]; - else if test $metadata[1] = "file"; - __fish_complete_path $metadata[2]; - else if test $metadata[1] = "plain"; - echo $metadata[2]; - end; - end; -end; - -complete --no-files --command %(prog_name)s --arguments \ -"(%(complete_func)s)"; -""" - - -class ShellComplete: - """Base class for providing shell completion support. A subclass for - a given shell will override attributes and methods to implement the - completion instructions (``source`` and ``complete``). - - :param cli: Command being called. - :param prog_name: Name of the executable in the shell. - :param complete_var: Name of the environment variable that holds - the completion instruction. - - .. versionadded:: 8.0 - """ - - name: t.ClassVar[str] - """Name to register the shell as with :func:`add_completion_class`. - This is used in completion instructions (``{name}_source`` and - ``{name}_complete``). - """ - - source_template: t.ClassVar[str] - """Completion script template formatted by :meth:`source`. This must - be provided by subclasses. - """ - - def __init__( - self, - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - complete_var: str, - ) -> None: - self.cli = cli - self.ctx_args = ctx_args - self.prog_name = prog_name - self.complete_var = complete_var - - @property - def func_name(self) -> str: - """The name of the shell function defined by the completion - script. - """ - safe_name = re.sub(r"\W*", "", self.prog_name.replace("-", "_"), flags=re.ASCII) - return f"_{safe_name}_completion" - - def source_vars(self) -> t.Dict[str, t.Any]: - """Vars for formatting :attr:`source_template`. - - By default this provides ``complete_func``, ``complete_var``, - and ``prog_name``. - """ - return { - "complete_func": self.func_name, - "complete_var": self.complete_var, - "prog_name": self.prog_name, - } - - def source(self) -> str: - """Produce the shell script that defines the completion - function. By default this ``%``-style formats - :attr:`source_template` with the dict returned by - :meth:`source_vars`. - """ - return self.source_template % self.source_vars() - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - """Use the env vars defined by the shell script to return a - tuple of ``args, incomplete``. This must be implemented by - subclasses. - """ - raise NotImplementedError - - def get_completions( - self, args: t.List[str], incomplete: str - ) -> t.List[CompletionItem]: - """Determine the context and last complete command or parameter - from the complete args. Call that object's ``shell_complete`` - method to get the completions for the incomplete value. - - :param args: List of complete args before the incomplete value. - :param incomplete: Value being completed. May be empty. - """ - ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args) - obj, incomplete = _resolve_incomplete(ctx, args, incomplete) - return obj.shell_complete(ctx, incomplete) - - def format_completion(self, item: CompletionItem) -> str: - """Format a completion item into the form recognized by the - shell script. This must be implemented by subclasses. - - :param item: Completion item to format. - """ - raise NotImplementedError - - def complete(self) -> str: - """Produce the completion data to send back to the shell. - - By default this calls :meth:`get_completion_args`, gets the - completions, then calls :meth:`format_completion` for each - completion. - """ - args, incomplete = self.get_completion_args() - completions = self.get_completions(args, incomplete) - out = [self.format_completion(item) for item in completions] - return "\n".join(out) - - -class BashComplete(ShellComplete): - """Shell completion for Bash.""" - - name = "bash" - source_template = _SOURCE_BASH - - @staticmethod - def _check_version() -> None: - import subprocess - - output = subprocess.run( - ["bash", "-c", 'echo "${BASH_VERSION}"'], stdout=subprocess.PIPE - ) - match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode()) - - if match is not None: - major, minor = match.groups() - - if major < "4" or major == "4" and minor < "4": - echo( - _( - "Shell completion is not supported for Bash" - " versions older than 4.4." - ), - err=True, - ) - else: - echo( - _("Couldn't detect Bash version, shell completion is not supported."), - err=True, - ) - - def source(self) -> str: - self._check_version() - return super().source() - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - cword = int(os.environ["COMP_CWORD"]) - args = cwords[1:cword] - - try: - incomplete = cwords[cword] - except IndexError: - incomplete = "" - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - return f"{item.type},{item.value}" - - -class ZshComplete(ShellComplete): - """Shell completion for Zsh.""" - - name = "zsh" - source_template = _SOURCE_ZSH - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - cword = int(os.environ["COMP_CWORD"]) - args = cwords[1:cword] - - try: - incomplete = cwords[cword] - except IndexError: - incomplete = "" - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - return f"{item.type}\n{item.value}\n{item.help if item.help else '_'}" - - -class FishComplete(ShellComplete): - """Shell completion for Fish.""" - - name = "fish" - source_template = _SOURCE_FISH - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - incomplete = os.environ["COMP_CWORD"] - args = cwords[1:] - - # Fish stores the partial word in both COMP_WORDS and - # COMP_CWORD, remove it from complete args. - if incomplete and args and args[-1] == incomplete: - args.pop() - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - if item.help: - return f"{item.type},{item.value}\t{item.help}" - - return f"{item.type},{item.value}" - - -ShellCompleteType = t.TypeVar("ShellCompleteType", bound=t.Type[ShellComplete]) - - -_available_shells: t.Dict[str, t.Type[ShellComplete]] = { - "bash": BashComplete, - "fish": FishComplete, - "zsh": ZshComplete, -} - - -def add_completion_class( - cls: ShellCompleteType, name: t.Optional[str] = None -) -> ShellCompleteType: - """Register a :class:`ShellComplete` subclass under the given name. - The name will be provided by the completion instruction environment - variable during completion. - - :param cls: The completion class that will handle completion for the - shell. - :param name: Name to register the class under. Defaults to the - class's ``name`` attribute. - """ - if name is None: - name = cls.name - - _available_shells[name] = cls - - return cls - - -def get_completion_class(shell: str) -> t.Optional[t.Type[ShellComplete]]: - """Look up a registered :class:`ShellComplete` subclass by the name - provided by the completion instruction environment variable. If the - name isn't registered, returns ``None``. - - :param shell: Name the class is registered under. - """ - return _available_shells.get(shell) - - -def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool: - """Determine if the given parameter is an argument that can still - accept values. - - :param ctx: Invocation context for the command represented by the - parsed complete args. - :param param: Argument object being checked. - """ - if not isinstance(param, Argument): - return False - - assert param.name is not None - # Will be None if expose_value is False. - value = ctx.params.get(param.name) - return ( - param.nargs == -1 - or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE - or ( - param.nargs > 1 - and isinstance(value, (tuple, list)) - and len(value) < param.nargs - ) - ) - - -def _start_of_option(ctx: Context, value: str) -> bool: - """Check if the value looks like the start of an option.""" - if not value: - return False - - c = value[0] - return c in ctx._opt_prefixes - - -def _is_incomplete_option(ctx: Context, args: t.List[str], param: Parameter) -> bool: - """Determine if the given parameter is an option that needs a value. - - :param args: List of complete args before the incomplete value. - :param param: Option object being checked. - """ - if not isinstance(param, Option): - return False - - if param.is_flag or param.count: - return False - - last_option = None - - for index, arg in enumerate(reversed(args)): - if index + 1 > param.nargs: - break - - if _start_of_option(ctx, arg): - last_option = arg - - return last_option is not None and last_option in param.opts - - -def _resolve_context( - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - args: t.List[str], -) -> Context: - """Produce the context hierarchy starting with the command and - traversing the complete arguments. This only follows the commands, - it doesn't trigger input prompts or callbacks. - - :param cli: Command being called. - :param prog_name: Name of the executable in the shell. - :param args: List of complete args before the incomplete value. - """ - ctx_args["resilient_parsing"] = True - ctx = cli.make_context(prog_name, args.copy(), **ctx_args) - args = ctx.protected_args + ctx.args - - while args: - command = ctx.command - - if isinstance(command, MultiCommand): - if not command.chain: - name, cmd, args = command.resolve_command(ctx, args) - - if cmd is None: - return ctx - - ctx = cmd.make_context(name, args, parent=ctx, resilient_parsing=True) - args = ctx.protected_args + ctx.args - else: - sub_ctx = ctx - - while args: - name, cmd, args = command.resolve_command(ctx, args) - - if cmd is None: - return ctx - - sub_ctx = cmd.make_context( - name, - args, - parent=ctx, - allow_extra_args=True, - allow_interspersed_args=False, - resilient_parsing=True, - ) - args = sub_ctx.args - - ctx = sub_ctx - args = [*sub_ctx.protected_args, *sub_ctx.args] - else: - break - - return ctx - - -def _resolve_incomplete( - ctx: Context, args: t.List[str], incomplete: str -) -> t.Tuple[t.Union[BaseCommand, Parameter], str]: - """Find the Click object that will handle the completion of the - incomplete value. Return the object and the incomplete value. - - :param ctx: Invocation context for the command represented by - the parsed complete args. - :param args: List of complete args before the incomplete value. - :param incomplete: Value being completed. May be empty. - """ - # Different shells treat an "=" between a long option name and - # value differently. Might keep the value joined, return the "=" - # as a separate item, or return the split name and value. Always - # split and discard the "=" to make completion easier. - if incomplete == "=": - incomplete = "" - elif "=" in incomplete and _start_of_option(ctx, incomplete): - name, _, incomplete = incomplete.partition("=") - args.append(name) - - # The "--" marker tells Click to stop treating values as options - # even if they start with the option character. If it hasn't been - # given and the incomplete arg looks like an option, the current - # command will provide option name completions. - if "--" not in args and _start_of_option(ctx, incomplete): - return ctx.command, incomplete - - params = ctx.command.get_params(ctx) - - # If the last complete arg is an option name with an incomplete - # value, the option will provide value completions. - for param in params: - if _is_incomplete_option(ctx, args, param): - return param, incomplete - - # It's not an option name or value. The first argument without a - # parsed value will provide value completions. - for param in params: - if _is_incomplete_argument(ctx, param): - return param, incomplete - - # There were no unparsed arguments, the command may be a group that - # will provide command name completions. - return ctx.command, incomplete diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/M_V_A_R_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/M_V_A_R_.py deleted file mode 100644 index 8371795eb2f2d2c233ec1725b8a2c21453170f23..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/M_V_A_R_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_M_V_A_R_(BaseTTXConverter): - pass diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/schema/base.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/schema/base.py deleted file mode 100644 index f4349917a2973aa4f2b421de8179f5e7009311f9..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/schema/base.py +++ /dev/null @@ -1,36 +0,0 @@ -"""Base schema for readers.""" -from dataclasses import dataclass - -from langchain.docstore.document import Document as LCDocument - -from gpt_index.schema import BaseDocument - - -@dataclass -class Document(BaseDocument): - """Generic interface for a data document. - - This document connects to data sources. - - """ - - def __post_init__(self) -> None: - """Post init.""" - super().__post_init__() - if self.text is None: - raise ValueError("text field not set.") - - @classmethod - def get_type(cls) -> str: - """Get Document type.""" - return "Document" - - def to_langchain_format(self) -> LCDocument: - """Convert struct to LangChain document format.""" - metadata = self.extra_info or {} - return LCDocument(page_content=self.text, metadata=metadata) - - @classmethod - def from_langchain_format(cls, doc: LCDocument) -> "Document": - """Convert struct from LangChain document format.""" - return cls(text=doc.page_content, extra_info=doc.metadata) diff --git a/spaces/johnberg/CLIPInverter/models/encoders/__init__.py b/spaces/johnberg/CLIPInverter/models/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jojoanne/cuisinerecommendation/README.md b/spaces/jojoanne/cuisinerecommendation/README.md deleted file mode 100644 index 95e9ffa862e51ca142778595d64f296adfdd3398..0000000000000000000000000000000000000000 --- a/spaces/jojoanne/cuisinerecommendation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cuisinerecommendation -emoji: 📈 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joushe/moe-tts/attentions.py b/spaces/joushe/moe-tts/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/joushe/moe-tts/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/tests/__init__.py b/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kepl/gpt/client/js/icons.js b/spaces/kepl/gpt/client/js/icons.js deleted file mode 100644 index 84fed38dd35e0d0203370a8314a360d27f350dd6..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/client/js/icons.js +++ /dev/null @@ -1 +0,0 @@ -window.FontAwesomeKitConfig={asyncLoading:{enabled:!1},autoA11y:{enabled:!0},baseUrl:"https://ka-f.fontawesome.com",baseUrlKit:"https://kit-pro.fontawesome.com",detectConflictsUntil:null,iconUploads:{},id:96462084,license:"pro",method:"css",minify:{enabled:!0},token:"d0514f1901",v4FontFaceShim:{enabled:!0},v4shim:{enabled:!0},v5FontFaceShim:{enabled:!0},version:"6.1.1"},function(t){"function"==typeof define&&define.amd?define("kit-loader",t):t()}(function(){"use strict";function t(e){return(t="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t})(e)}function e(t,e,n){return e in t?Object.defineProperty(t,e,{value:n,enumerable:!0,configurable:!0,writable:!0}):t[e]=n,t}function n(t,e){var n=Object.keys(t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(t);e&&(o=o.filter(function(e){return Object.getOwnPropertyDescriptor(t,e).enumerable})),n.push.apply(n,o)}return n}function o(t){for(var o=1;ot.length)&&(e=t.length);for(var n=0,o=new Array(e);n2&&void 0!==arguments[2]?arguments[2]:function(){},r=e.document||r,i=a.bind(a,r,["fa","fab","fas","far","fal","fad","fak"]),u=Object.keys(t.iconUploads||{}).length>0;t.autoA11y.enabled&&n(i);var f=[{id:"fa-main",addOn:void 0}];t.v4shim&&t.v4shim.enabled&&f.push({id:"fa-v4-shims",addOn:"-v4-shims"}),t.v5FontFaceShim&&t.v5FontFaceShim.enabled&&f.push({id:"fa-v5-font-face",addOn:"-v5-font-face"}),t.v4FontFaceShim&&t.v4FontFaceShim.enabled&&f.push({id:"fa-v4-font-face",addOn:"-v4-font-face"}),u&&f.push({id:"fa-kit-upload",customCss:!0});var s=f.map(function(n){return new F(function(r,i){E(n.customCss?function(t){return t.baseUrlKit+"/"+t.token+"/"+t.id+"/kit-upload.css"}(t):c(t,{addOn:n.addOn,minify:t.minify.enabled}),e).then(function(i){r(function(t,e){var n=e.contentFilter||function(t,e){return t},o=document.createElement("style"),r=document.createTextNode(n(t,e));return o.appendChild(r),o.media="all",e.id&&o.setAttribute("id",e.id),e&&e.detectingConflicts&&e.detectionIgnoreAttr&&o.setAttributeNode(document.createAttribute(e.detectionIgnoreAttr)),o}(i,o(o({},e),{},{baseUrl:t.baseUrl,version:t.version,id:n.id,contentFilter:function(t,e){return _(t,e.baseUrl,e.version)}})))}).catch(i)})});return F.all(s)}function P(t,e){var n=document.createElement("SCRIPT"),o=document.createTextNode(t);return n.appendChild(o),n.referrerPolicy="strict-origin",e.id&&n.setAttribute("id",e.id),e&&e.detectingConflicts&&e.detectionIgnoreAttr&&n.setAttributeNode(document.createAttribute(e.detectionIgnoreAttr)),n}function U(t){var e,n=[],o=document,r=(o.documentElement.doScroll?/^loaded|^c/:/^loaded|^i|^c/).test(o.readyState);r||o.addEventListener("DOMContentLoaded",e=function(){for(o.removeEventListener("DOMContentLoaded",e),r=1;e=n.shift();)e()}),r?setTimeout(t,0):n.push(t)}try{if(window.FontAwesomeKitConfig){var k=window.FontAwesomeKitConfig,L={detectingConflicts:k.detectConflictsUntil&&new Date<=new Date(k.detectConflictsUntil),detectionIgnoreAttr:"data-fa-detection-ignore",fetch:window.fetch,token:k.token,XMLHttpRequest:window.XMLHttpRequest,document:document},I=document.currentScript,T=I?I.parentElement:document.head;(function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{};return"js"===t.method?function(t,e){e.autoA11y=t.autoA11y.enabled,"pro"===t.license&&(e.autoFetchSvg=!0,e.fetchSvgFrom=t.baseUrl+"/releases/"+("latest"===t.version?"latest":"v".concat(t.version))+"/svgs",e.fetchUploadedSvgFrom=t.uploadsUrl);var n=[];return t.v4shim.enabled&&n.push(new F(function(n,r){E(c(t,{addOn:"-v4-shims",minify:t.minify.enabled}),e).then(function(t){n(P(t,o(o({},e),{},{id:"fa-v4-shims"})))}).catch(r)})),n.push(new F(function(n,r){E(c(t,{minify:t.minify.enabled}),e).then(function(t){var r=P(t,o(o({},e),{},{id:"fa-main"}));n(function(t,e){var n=e&&void 0!==e.autoFetchSvg?e.autoFetchSvg:void 0,o=e&&void 0!==e.autoA11y?e.autoA11y:void 0;return void 0!==o&&t.setAttribute("data-auto-a11y",o?"true":"false"),n&&(t.setAttributeNode(document.createAttribute("data-auto-fetch-svg")),t.setAttribute("data-fetch-svg-from",e.fetchSvgFrom),t.setAttribute("data-fetch-uploaded-svg-from",e.fetchUploadedSvgFrom)),t}(r,e))}).catch(r)})),F.all(n)}(t,e):"css"===t.method?C(t,e,function(t){U(t),function(t){"undefined"!=typeof MutationObserver&&new MutationObserver(t).observe(document,{childList:!0,subtree:!0})}(t)}):void 0})(k,L).then(function(t){t.map(function(t){try{T.insertBefore(t,I?I.nextSibling:null)}catch(e){T.appendChild(t)}}),L.detectingConflicts&&I&&U(function(){I.setAttributeNode(document.createAttribute(L.detectionIgnoreAttr));var t=function(t,e){var n=document.createElement("script");return e&&e.detectionIgnoreAttr&&n.setAttributeNode(document.createAttribute(e.detectionIgnoreAttr)),n.src=c(t,{baseFilename:"conflict-detection",fileSuffix:"js",subdir:"js",minify:t.minify.enabled}),n}(k,L);document.body.appendChild(t)})}).catch(function(t){console.error("".concat("Font Awesome Kit:"," ").concat(t))})}}catch(t){console.error("".concat("Font Awesome Kit:"," ").concat(t))}}); \ No newline at end of file diff --git a/spaces/keremberke/awesome-yolov8-models/app.py b/spaces/keremberke/awesome-yolov8-models/app.py deleted file mode 100644 index 093336784777abeda1a9c6a6bf464e7e2841c53c..0000000000000000000000000000000000000000 --- a/spaces/keremberke/awesome-yolov8-models/app.py +++ /dev/null @@ -1,219 +0,0 @@ -import os -from pathlib import Path -import gradio as gr -from datasets import load_dataset -from ultralyticsplus import YOLO, render_result, postprocess_classify_output - -from utils import load_models_from_txt_files, get_dataset_id_from_model_id, get_task_from_readme - -EXAMPLE_IMAGE_DIR = 'example_images' - -DEFAULT_DET_MODEL_ID = 'keremberke/yolov8m-valorant-detection' -DEFAULT_DET_DATASET_ID = 'keremberke/valorant-object-detection' -DEFAULT_SEG_MODEL_ID = 'keremberke/yolov8s-building-segmentation' -DEFAULT_SEG_DATASET_ID = 'keremberke/satellite-building-segmentation' -DEFAULT_CLS_MODEL_ID = 'keremberke/yolov8m-chest-xray-classification' -DEFAULT_CLS_DATASET_ID = 'keremberke/chest-xray-classification' - -# load model ids and default models -det_model_ids, seg_model_ids, cls_model_ids = load_models_from_txt_files() -task_to_model_ids = {'detect': det_model_ids, 'segment': seg_model_ids, 'classify': cls_model_ids} -det_model = YOLO(DEFAULT_DET_MODEL_ID) -det_model_id = DEFAULT_DET_MODEL_ID -seg_model = YOLO(DEFAULT_SEG_MODEL_ID) -seg_model_id = DEFAULT_SEG_MODEL_ID -cls_model = YOLO(DEFAULT_CLS_MODEL_ID) -cls_model_id = DEFAULT_CLS_MODEL_ID - - -def get_examples(task): - examples = [] - Path(EXAMPLE_IMAGE_DIR).mkdir(parents=True, exist_ok=True) - image_ind = 0 - for model_id in task_to_model_ids[task]: - dataset_id = get_dataset_id_from_model_id(model_id) - ds = load_dataset(dataset_id, name="mini")["validation"] - for ind in range(min(2, len(ds))): - jpeg_image_file = ds[ind]["image"] - image_file_path = str(Path(EXAMPLE_IMAGE_DIR) / f"{task}_example_{image_ind}.jpg") - jpeg_image_file.save(image_file_path, format='JPEG', quality=100) - image_path = os.path.abspath(image_file_path) - examples.append([image_path, model_id, 0.25]) - image_ind += 1 - return examples - - -# load default examples using default datasets -det_examples = get_examples('detect') -seg_examples = get_examples('segment') -cls_examples = get_examples('classify') - - -def predict(image, model_id, threshold): - """Perform inference on image.""" - # set task - if model_id in det_model_ids: - task = 'detect' - elif model_id in seg_model_ids: - task = 'segment' - elif model_id in cls_model_ids: - task = 'classify' - else: - raise ValueError(f"Invalid model_id: {model_id}") - - # set model - if task == 'detect': - global det_model - global det_model_id - if model_id != det_model_id: - det_model = YOLO(model_id) - det_model_id = model_id - model = det_model - elif task == 'segment': - global seg_model - global seg_model_id - if model_id != seg_model_id: - seg_model = YOLO(model_id) - seg_model_id = model_id - model = seg_model - elif task == 'classify': - global cls_model - global cls_model_id - if model_id != cls_model_id: - cls_model = YOLO(model_id) - cls_model_id = model_id - model = cls_model - else: - raise ValueError(f"Invalid task: {task}") - - # set model parameters - model.overrides['conf'] = threshold - - # perform inference - results = model.predict(image) - print(model_id) - print(task) - - if task in ['detect', 'segment']: - # draw predictions - output = render_result(model=model, image=image, result=results[0]) - elif task == 'classify': - # postprocess classification output - output = postprocess_classify_output(model, result=results[0]) - else: - raise ValueError(f"Invalid task: {task}") - - return output - - -with gr.Blocks() as demo: - gr.Markdown("""#

            -

            -
            project website | project github -

            -

            - Follow me for more! -
            twitter | github | linkedin -

            - """) - with gr.Tab("Detection"): - with gr.Row(): - with gr.Column(): - detect_input = gr.Image() - detect_model_id = gr.Dropdown(choices=det_model_ids, label="Model:", value=DEFAULT_DET_MODEL_ID, interactive=True) - detect_threshold = gr.Slider(maximum=1, step=0.01, value=0.25, label="Threshold:", interactive=True) - detect_button = gr.Button("Detect!") - with gr.Column(): - detect_output = gr.Image(label="Predictions:", interactive=False) - with gr.Row(): - half_ind = int(len(det_examples) / 2) - with gr.Column(): - gr.Examples( - det_examples[half_ind:], - inputs=[detect_input, detect_model_id, detect_threshold], - outputs=detect_output, - fn=predict, - cache_examples=False, - run_on_click=False, - ) - with gr.Column(): - gr.Examples( - det_examples[:half_ind], - inputs=[detect_input, detect_model_id, detect_threshold], - outputs=detect_output, - fn=predict, - cache_examples=False, - run_on_click=False, - ) - with gr.Tab("Segmentation"): - with gr.Row(): - with gr.Column(): - segment_input = gr.Image() - segment_model_id = gr.Dropdown(choices=seg_model_ids, label="Model:", value=DEFAULT_SEG_MODEL_ID, interactive=True) - segment_threshold = gr.Slider(maximum=1, step=0.01, value=0.25, label="Threshold:", interactive=True) - segment_button = gr.Button("Segment!") - with gr.Column(): - segment_output = gr.Image(label="Predictions:", interactive=False) - with gr.Row(): - half_ind = int(len(seg_examples) / 2) - with gr.Column(): - gr.Examples( - seg_examples[half_ind:], - inputs=[segment_input, segment_model_id, segment_threshold], - outputs=segment_output, - fn=predict, - cache_examples=False, - run_on_click=False, - ) - with gr.Column(): - gr.Examples( - seg_examples[:half_ind], - inputs=[segment_input, segment_model_id, segment_threshold], - outputs=segment_output, - fn=predict, - cache_examples=False, - run_on_click=False, - ) - with gr.Tab("Classification"): - with gr.Row(): - with gr.Column(): - classify_input = gr.Image() - classify_model_id = gr.Dropdown(choices=cls_model_ids, label="Model:", value=DEFAULT_CLS_MODEL_ID, interactive=True) - classify_threshold = gr.Slider(maximum=1, step=0.01, value=0.25, label="Threshold:", interactive=True) - classify_button = gr.Button("Classify!") - with gr.Column(): - classify_output = gr.Label( - label="Predictions:", show_label=True, num_top_classes=5 - ) - with gr.Row(): - half_ind = int(len(cls_examples) / 2) - with gr.Column(): - gr.Examples( - cls_examples[half_ind:], - inputs=[classify_input, classify_model_id, classify_threshold], - outputs=classify_output, - fn=predict, - cache_examples=False, - run_on_click=False, - ) - with gr.Column(): - gr.Examples( - cls_examples[:half_ind], - inputs=[classify_input, classify_model_id, classify_threshold], - outputs=classify_output, - fn=predict, - cache_examples=False, - run_on_click=False, - ) - - detect_button.click( - predict, inputs=[detect_input, detect_model_id, detect_threshold], outputs=detect_output, api_name="detect" - ) - segment_button.click( - predict, inputs=[segment_input, segment_model_id, segment_threshold], outputs=segment_output, api_name="segment" - ) - classify_button.click( - predict, inputs=[classify_input, classify_model_id, classify_threshold], outputs=classify_output, api_name="classify" - ) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/cloning/__init__.py b/spaces/kevinwang676/Bark-Voice-Cloning/cloning/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/data_objects/random_cycler.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/data_objects/random_cycler.py deleted file mode 100644 index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/data_objects/random_cycler.py +++ /dev/null @@ -1,37 +0,0 @@ -import random - -class RandomCycler: - """ - Creates an internal copy of a sequence and allows access to its items in a constrained random - order. For a source sequence of n items and one or several consecutive queries of a total - of m items, the following guarantees hold (one implies the other): - - Each item will be returned between m // n and ((m - 1) // n) + 1 times. - - Between two appearances of the same item, there may be at most 2 * (n - 1) other items. - """ - - def __init__(self, source): - if len(source) == 0: - raise Exception("Can't create RandomCycler from an empty collection") - self.all_items = list(source) - self.next_items = [] - - def sample(self, count: int): - shuffle = lambda l: random.sample(l, len(l)) - - out = [] - while count > 0: - if count >= len(self.all_items): - out.extend(shuffle(list(self.all_items))) - count -= len(self.all_items) - continue - n = min(count, len(self.next_items)) - out.extend(self.next_items[:n]) - count -= n - self.next_items = self.next_items[n:] - if len(self.next_items) == 0: - self.next_items = shuffle(list(self.all_items)) - return out - - def __next__(self): - return self.sample(1)[0] - diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/tts_voice.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/tts_voice.py deleted file mode 100644 index 8ee194c252f82ada41ccc14f33adb592e1a00985..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/tts_voice.py +++ /dev/null @@ -1,26 +0,0 @@ -tts_order_voice = {'英语 (美国)-Jenny-女': 'en-US-JennyNeural', - '英语 (美国)-Guy-男': 'en-US-GuyNeural', - '英语 (美国)-Ana-女': 'en-US-AnaNeural', - '英语 (美国)-Aria-女': 'en-US-AriaNeural', - '英语 (美国)-Christopher-男': 'en-US-ChristopherNeural', - '英语 (美国)-Eric-男': 'en-US-EricNeural', - '英语 (美国)-Michelle-女': 'en-US-MichelleNeural', - '英语 (美国)-Roger-男': 'en-US-RogerNeural', - '韩语 (韩国)-Sun-Hi-女': 'ko-KR-SunHiNeural', - '韩语 (韩国)-InJoon-男': 'ko-KR-InJoonNeural', - '日语 (日本)-Nanami-女': 'ja-JP-NanamiNeural', - '日语 (日本)-Keita-男': 'ja-JP-KeitaNeural', - '普通话 (中国大陆)-Xiaoxiao-女': 'zh-CN-XiaoxiaoNeural', - '普通话 (中国大陆)-Yunyang-男': 'zh-CN-YunyangNeural', - '普通话 (中国大陆)-Yunxi-男': 'zh-CN-YunxiNeural', - '普通话 (中国大陆)-Xiaoyi-女': 'zh-CN-XiaoyiNeural', - '普通话 (中国大陆)-Yunjian-男': 'zh-CN-YunjianNeural', - '普通话 (中国大陆)-Yunxia-男': 'zh-CN-YunxiaNeural', - '东北话 (中国大陆)-Xiaobei-女': 'zh-CN-liaoning-XiaobeiNeural', - '中原官话 (中国陕西)-Xiaoni-女': 'zh-CN-shaanxi-XiaoniNeural', - '粤语 (中国香港)-HiuMaan-女': 'zh-HK-HiuMaanNeural', - '粤语 (中国香港)-HiuGaai-女': 'zh-HK-HiuGaaiNeural', - '粤语 (中国香港)-WanLung-男': 'zh-HK-WanLungNeural', - '台湾普通话-HsiaoChen-女': 'zh-TW-HsiaoChenNeural', - '台湾普通话-HsiaoYu-女': 'zh-TW-HsiaoYuNeural', - '台湾普通话-YunJhe-男': 'zh-TW-YunJheNeural'} diff --git a/spaces/kevinwang676/FreeVC-en/speaker_encoder/train.py b/spaces/kevinwang676/FreeVC-en/speaker_encoder/train.py deleted file mode 100644 index 282e4f51b3825c7f32e628506eb40a98e58e2deb..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/FreeVC-en/speaker_encoder/train.py +++ /dev/null @@ -1,125 +0,0 @@ -from speaker_encoder.visualizations import Visualizations -from speaker_encoder.data_objects import SpeakerVerificationDataLoader, SpeakerVerificationDataset -from speaker_encoder.params_model import * -from speaker_encoder.model import SpeakerEncoder -from utils.profiler import Profiler -from pathlib import Path -import torch - -def sync(device: torch.device): - # FIXME - return - # For correct profiling (cuda operations are async) - if device.type == "cuda": - torch.cuda.synchronize(device) - -def train(run_id: str, clean_data_root: Path, models_dir: Path, umap_every: int, save_every: int, - backup_every: int, vis_every: int, force_restart: bool, visdom_server: str, - no_visdom: bool): - # Create a dataset and a dataloader - dataset = SpeakerVerificationDataset(clean_data_root) - loader = SpeakerVerificationDataLoader( - dataset, - speakers_per_batch, # 64 - utterances_per_speaker, # 10 - num_workers=8, - ) - - # Setup the device on which to run the forward pass and the loss. These can be different, - # because the forward pass is faster on the GPU whereas the loss is often (depending on your - # hyperparameters) faster on the CPU. - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # FIXME: currently, the gradient is None if loss_device is cuda - loss_device = torch.device("cpu") - - # Create the model and the optimizer - model = SpeakerEncoder(device, loss_device) - optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_init) - init_step = 1 - - # Configure file path for the model - state_fpath = models_dir.joinpath(run_id + ".pt") - backup_dir = models_dir.joinpath(run_id + "_backups") - - # Load any existing model - if not force_restart: - if state_fpath.exists(): - print("Found existing model \"%s\", loading it and resuming training." % run_id) - checkpoint = torch.load(state_fpath) - init_step = checkpoint["step"] - model.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - optimizer.param_groups[0]["lr"] = learning_rate_init - else: - print("No model \"%s\" found, starting training from scratch." % run_id) - else: - print("Starting the training from scratch.") - model.train() - - # Initialize the visualization environment - vis = Visualizations(run_id, vis_every, server=visdom_server, disabled=no_visdom) - vis.log_dataset(dataset) - vis.log_params() - device_name = str(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU") - vis.log_implementation({"Device": device_name}) - - # Training loop - profiler = Profiler(summarize_every=10, disabled=False) - for step, speaker_batch in enumerate(loader, init_step): - profiler.tick("Blocking, waiting for batch (threaded)") - - # Forward pass - inputs = torch.from_numpy(speaker_batch.data).to(device) - sync(device) - profiler.tick("Data to %s" % device) - embeds = model(inputs) - sync(device) - profiler.tick("Forward pass") - embeds_loss = embeds.view((speakers_per_batch, utterances_per_speaker, -1)).to(loss_device) - loss, eer = model.loss(embeds_loss) - sync(loss_device) - profiler.tick("Loss") - - # Backward pass - model.zero_grad() - loss.backward() - profiler.tick("Backward pass") - model.do_gradient_ops() - optimizer.step() - profiler.tick("Parameter update") - - # Update visualizations - # learning_rate = optimizer.param_groups[0]["lr"] - vis.update(loss.item(), eer, step) - - # Draw projections and save them to the backup folder - if umap_every != 0 and step % umap_every == 0: - print("Drawing and saving projections (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - projection_fpath = backup_dir.joinpath("%s_umap_%06d.png" % (run_id, step)) - embeds = embeds.detach().cpu().numpy() - vis.draw_projections(embeds, utterances_per_speaker, step, projection_fpath) - vis.save() - - # Overwrite the latest version of the model - if save_every != 0 and step % save_every == 0: - print("Saving the model (step %d)" % step) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, state_fpath) - - # Make a backup - if backup_every != 0 and step % backup_every == 0: - print("Making a backup (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - backup_fpath = backup_dir.joinpath("%s_bak_%06d.pt" % (run_id, step)) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, backup_fpath) - - profiler.tick("Extras (visualizations, saving)") - \ No newline at end of file diff --git a/spaces/kevinwang676/Voice-Cloning-for-Bilibili/README.md b/spaces/kevinwang676/Voice-Cloning-for-Bilibili/README.md deleted file mode 100644 index bbaa1ca3564258399c6c3aa442bf17c650ce5ada..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Voice-Cloning-for-Bilibili/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Voice Cloning -emoji: 😻 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: merve/voice-cloning ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kingabzpro/real-time-Urdu-ASR/README.md b/spaces/kingabzpro/real-time-Urdu-ASR/README.md deleted file mode 100644 index 5d84e0a34088acf7ad999e1b1eff6074ea2271c9..0000000000000000000000000000000000000000 --- a/spaces/kingabzpro/real-time-Urdu-ASR/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Streaming Urdu Asr -emoji: 🔊 -colorFrom: pink -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py deleted file mode 100644 index 6fc100c8f96e817a6ed2666f7c9f762af2463b48..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,109 +0,0 @@ -import os.path as osp - -from annotator.uniformer.mmcv.runner import DistEvalHook as _DistEvalHook -from annotator.uniformer.mmcv.runner import EvalHook as _EvalHook - - -class EvalHook(_EvalHook): - """Single GPU EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.efficient_test = efficient_test - - def after_train_iter(self, runner): - """After train epoch hook. - - Override default ``single_gpu_test``. - """ - if self.by_epoch or not self.every_n_iters(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import single_gpu_test - runner.log_buffer.clear() - results = single_gpu_test( - runner.model, - self.dataloader, - show=False, - efficient_test=self.efficient_test) - self.evaluate(runner, results) - - def after_train_epoch(self, runner): - """After train epoch hook. - - Override default ``single_gpu_test``. - """ - if not self.by_epoch or not self.every_n_epochs(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import single_gpu_test - runner.log_buffer.clear() - results = single_gpu_test(runner.model, self.dataloader, show=False) - self.evaluate(runner, results) - - -class DistEvalHook(_DistEvalHook): - """Distributed EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.efficient_test = efficient_test - - def after_train_iter(self, runner): - """After train epoch hook. - - Override default ``multi_gpu_test``. - """ - if self.by_epoch or not self.every_n_iters(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import multi_gpu_test - runner.log_buffer.clear() - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=osp.join(runner.work_dir, '.eval_hook'), - gpu_collect=self.gpu_collect, - efficient_test=self.efficient_test) - if runner.rank == 0: - print('\n') - self.evaluate(runner, results) - - def after_train_epoch(self, runner): - """After train epoch hook. - - Override default ``multi_gpu_test``. - """ - if not self.by_epoch or not self.every_n_epochs(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import multi_gpu_test - runner.log_buffer.clear() - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=osp.join(runner.work_dir, '.eval_hook'), - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - self.evaluate(runner, results) diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/helpers.py b/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/helpers.py deleted file mode 100644 index b51fdf97141407fcc1c9d249a086ddbfd042469f..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/helpers.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import namedtuple -import torch -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. """ - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/queueing.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/queueing.py deleted file mode 100644 index 525551b1a7b909b8268724581483339fb1e0a255..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/queueing.py +++ /dev/null @@ -1,470 +0,0 @@ -from __future__ import annotations - -import asyncio -import copy -import sys -import time -from asyncio import TimeoutError as AsyncTimeOutError -from collections import deque -from typing import Any, Deque - -import fastapi -import httpx - -from gradio.data_classes import Estimation, PredictBody, Progress, ProgressUnit -from gradio.helpers import TrackedIterable -from gradio.utils import AsyncRequest, run_coro_in_background, set_task_name - - -class Event: - def __init__( - self, - websocket: fastapi.WebSocket, - session_hash: str, - fn_index: int, - ): - self.websocket = websocket - self.session_hash: str = session_hash - self.fn_index: int = fn_index - self._id = f"{self.session_hash}_{self.fn_index}" - self.data: PredictBody | None = None - self.lost_connection_time: float | None = None - self.token: str | None = None - self.progress: Progress | None = None - self.progress_pending: bool = False - - async def disconnect(self, code: int = 1000): - await self.websocket.close(code=code) - - -class Queue: - def __init__( - self, - live_updates: bool, - concurrency_count: int, - update_intervals: float, - max_size: int | None, - blocks_dependencies: list, - ): - self.event_queue: Deque[Event] = deque() - self.events_pending_reconnection = [] - self.stopped = False - self.max_thread_count = concurrency_count - self.update_intervals = update_intervals - self.active_jobs: list[None | list[Event]] = [None] * concurrency_count - self.delete_lock = asyncio.Lock() - self.server_path = None - self.duration_history_total = 0 - self.duration_history_count = 0 - self.avg_process_time = 0 - self.avg_concurrent_process_time = None - self.queue_duration = 1 - self.live_updates = live_updates - self.sleep_when_free = 0.05 - self.progress_update_sleep_when_free = 0.1 - self.max_size = max_size - self.blocks_dependencies = blocks_dependencies - self.access_token = "" - self.queue_client = None - - async def start(self, progress_tracking=False, ssl_verify=True): - # So that the client is attached to the running event loop - self.queue_client = httpx.AsyncClient(verify=ssl_verify) - - run_coro_in_background(self.start_processing) - if progress_tracking: - run_coro_in_background(self.start_progress_tracking) - if not self.live_updates: - run_coro_in_background(self.notify_clients) - - def close(self): - self.stopped = True - - def resume(self): - self.stopped = False - - def set_url(self, url: str): - self.server_path = url - - def set_access_token(self, token: str): - self.access_token = token - - def get_active_worker_count(self) -> int: - count = 0 - for worker in self.active_jobs: - if worker is not None: - count += 1 - return count - - def get_events_in_batch(self) -> tuple[list[Event] | None, bool]: - if not (self.event_queue): - return None, False - - first_event = self.event_queue.popleft() - events = [first_event] - - event_fn_index = first_event.fn_index - batch = self.blocks_dependencies[event_fn_index]["batch"] - - if batch: - batch_size = self.blocks_dependencies[event_fn_index]["max_batch_size"] - rest_of_batch = [ - event for event in self.event_queue if event.fn_index == event_fn_index - ][: batch_size - 1] - events.extend(rest_of_batch) - [self.event_queue.remove(event) for event in rest_of_batch] - - return events, batch - - async def start_processing(self) -> None: - while not self.stopped: - if not self.event_queue: - await asyncio.sleep(self.sleep_when_free) - continue - - if None not in self.active_jobs: - await asyncio.sleep(self.sleep_when_free) - continue - # Using mutex to avoid editing a list in use - async with self.delete_lock: - events, batch = self.get_events_in_batch() - - if events: - self.active_jobs[self.active_jobs.index(None)] = events - task = run_coro_in_background(self.process_events, events, batch) - run_coro_in_background(self.broadcast_live_estimations) - set_task_name(task, events[0].session_hash, events[0].fn_index, batch) - - async def start_progress_tracking(self) -> None: - while not self.stopped: - if not any(self.active_jobs): - await asyncio.sleep(self.progress_update_sleep_when_free) - continue - - for job in self.active_jobs: - if job is None: - continue - for event in job: - if event.progress_pending and event.progress: - event.progress_pending = False - client_awake = await self.send_message( - event, event.progress.dict() - ) - if not client_awake: - await self.clean_event(event) - - await asyncio.sleep(self.progress_update_sleep_when_free) - - def set_progress( - self, - event_id: str, - iterables: list[TrackedIterable] | None, - ): - if iterables is None: - return - for job in self.active_jobs: - if job is None: - continue - for evt in job: - if evt._id == event_id: - progress_data: list[ProgressUnit] = [] - for iterable in iterables: - progress_unit = ProgressUnit( - index=iterable.index, - length=iterable.length, - unit=iterable.unit, - progress=iterable.progress, - desc=iterable.desc, - ) - progress_data.append(progress_unit) - evt.progress = Progress(progress_data=progress_data) - evt.progress_pending = True - - def push(self, event: Event) -> int | None: - """ - Add event to queue, or return None if Queue is full - Parameters: - event: Event to add to Queue - Returns: - rank of submitted Event - """ - queue_len = len(self.event_queue) - if self.max_size is not None and queue_len >= self.max_size: - return None - self.event_queue.append(event) - return queue_len - - async def clean_event(self, event: Event) -> None: - if event in self.event_queue: - async with self.delete_lock: - self.event_queue.remove(event) - - async def broadcast_live_estimations(self) -> None: - """ - Runs 2 functions sequentially instead of concurrently. Otherwise dced clients are tried to get deleted twice. - """ - if self.live_updates: - await self.broadcast_estimations() - - async def gather_event_data(self, event: Event, receive_timeout=60) -> bool: - """ - Gather data for the event - Parameters: - event: the Event to gather data for - receive_timeout: how long to wait for data to be received from frontend - """ - if not event.data: - client_awake = await self.send_message(event, {"msg": "send_data"}) - if not client_awake: - return False - data, client_awake = await self.get_message(event, timeout=receive_timeout) - if not client_awake: - # In the event, we timeout due to large data size - # Let the client know, otherwise will hang - await self.send_message( - event, - { - "msg": "process_completed", - "output": {"error": "Time out uploading data to server"}, - "success": False, - }, - ) - return False - event.data = data - return True - - async def notify_clients(self) -> None: - """ - Notify clients about events statuses in the queue periodically. - """ - while not self.stopped: - await asyncio.sleep(self.update_intervals) - if self.event_queue: - await self.broadcast_estimations() - - async def broadcast_estimations(self) -> None: - estimation = self.get_estimation() - # Send all messages concurrently - await asyncio.gather( - *[ - self.send_estimation(event, estimation, rank) - for rank, event in enumerate(self.event_queue) - ] - ) - - async def send_estimation( - self, event: Event, estimation: Estimation, rank: int - ) -> Estimation: - """ - Send estimation about ETA to the client. - - Parameters: - event: - estimation: - rank: - """ - estimation.rank = rank - - if self.avg_concurrent_process_time is not None: - estimation.rank_eta = ( - estimation.rank * self.avg_concurrent_process_time - + self.avg_process_time - ) - if None not in self.active_jobs: - # Add estimated amount of time for a thread to get empty - estimation.rank_eta += self.avg_concurrent_process_time - client_awake = await self.send_message(event, estimation.dict()) - if not client_awake: - await self.clean_event(event) - return estimation - - def update_estimation(self, duration: float) -> None: - """ - Update estimation by last x element's average duration. - - Parameters: - duration: - """ - self.duration_history_total += duration - self.duration_history_count += 1 - self.avg_process_time = ( - self.duration_history_total / self.duration_history_count - ) - self.avg_concurrent_process_time = self.avg_process_time / min( - self.max_thread_count, self.duration_history_count - ) - self.queue_duration = self.avg_concurrent_process_time * len(self.event_queue) - - def get_estimation(self) -> Estimation: - return Estimation( - queue_size=len(self.event_queue), - avg_event_process_time=self.avg_process_time, - avg_event_concurrent_process_time=self.avg_concurrent_process_time, - queue_eta=self.queue_duration, - ) - - def get_request_params(self, websocket: fastapi.WebSocket) -> dict[str, Any]: - return { - "url": str(websocket.url), - "headers": dict(websocket.headers), - "query_params": dict(websocket.query_params), - "path_params": dict(websocket.path_params), - "client": {"host": websocket.client.host, "port": websocket.client.port}, # type: ignore - } - - async def call_prediction(self, events: list[Event], batch: bool): - data = events[0].data - assert data is not None, "No event data" - token = events[0].token - data.event_id = events[0]._id if not batch else None - try: - data.request = self.get_request_params(events[0].websocket) - except ValueError: - pass - - if batch: - data.data = list(zip(*[event.data.data for event in events if event.data])) - data.request = [ - self.get_request_params(event.websocket) - for event in events - if event.data - ] - data.batched = True - response = await AsyncRequest( - method=AsyncRequest.Method.POST, - url=f"{self.server_path}api/predict", - json=dict(data), - headers={"Authorization": f"Bearer {self.access_token}"}, - cookies={"access-token": token} if token is not None else None, - client=self.queue_client, - ) - return response - - async def process_events(self, events: list[Event], batch: bool) -> None: - awake_events: list[Event] = [] - try: - for event in events: - client_awake = await self.gather_event_data(event) - if client_awake: - client_awake = await self.send_message( - event, {"msg": "process_starts"} - ) - if client_awake: - awake_events.append(event) - if not awake_events: - return - begin_time = time.time() - response = await self.call_prediction(awake_events, batch) - if response.has_exception: - for event in awake_events: - await self.send_message( - event, - { - "msg": "process_completed", - "output": {"error": str(response.exception)}, - "success": False, - }, - ) - elif response.json.get("is_generating", False): - old_response = response - while response.json.get("is_generating", False): - # Python 3.7 doesn't have named tasks. - # In order to determine if a task was cancelled, we - # ping the websocket to see if it was closed mid-iteration. - if sys.version_info < (3, 8): - is_alive = await self.send_message(event, {"msg": "alive?"}) - if not is_alive: - return - old_response = response - open_ws = [] - for event in awake_events: - open = await self.send_message( - event, - { - "msg": "process_generating", - "output": old_response.json, - "success": old_response.status == 200, - }, - ) - open_ws.append(open) - awake_events = [ - e for e, is_open in zip(awake_events, open_ws) if is_open - ] - if not awake_events: - return - response = await self.call_prediction(awake_events, batch) - for event in awake_events: - if response.status != 200: - relevant_response = response - else: - relevant_response = old_response - - await self.send_message( - event, - { - "msg": "process_completed", - "output": relevant_response.json, - "success": relevant_response.status == 200, - }, - ) - else: - output = copy.deepcopy(response.json) - for e, event in enumerate(awake_events): - if batch and "data" in output: - output["data"] = list(zip(*response.json.get("data")))[e] - await self.send_message( - event, - { - "msg": "process_completed", - "output": output, - "success": response.status == 200, - }, - ) - end_time = time.time() - if response.status == 200: - self.update_estimation(end_time - begin_time) - finally: - for event in awake_events: - try: - await event.disconnect() - except Exception: - pass - self.active_jobs[self.active_jobs.index(events)] = None - for event in events: - await self.clean_event(event) - # Always reset the state of the iterator - # If the job finished successfully, this has no effect - # If the job is cancelled, this will enable future runs - # to start "from scratch" - await self.reset_iterators(event.session_hash, event.fn_index) - - async def send_message(self, event, data: dict, timeout: float | int = 1) -> bool: - try: - await asyncio.wait_for( - event.websocket.send_json(data=data), timeout=timeout - ) - return True - except Exception: - await self.clean_event(event) - return False - - async def get_message(self, event, timeout=5) -> tuple[PredictBody | None, bool]: - try: - data = await asyncio.wait_for( - event.websocket.receive_json(), timeout=timeout - ) - return PredictBody(**data), True - except AsyncTimeOutError: - await self.clean_event(event) - return None, False - - async def reset_iterators(self, session_hash: str, fn_index: int): - await AsyncRequest( - method=AsyncRequest.Method.POST, - url=f"{self.server_path}reset", - json={ - "session_hash": session_hash, - "fn_index": fn_index, - }, - client=self.queue_client, - ) diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/utils/utils_dist.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/utils/utils_dist.py deleted file mode 100644 index 88811737a8fc7cb6e12d9226a9242dbf8391d86b..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/utils/utils_dist.py +++ /dev/null @@ -1,201 +0,0 @@ -# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/dist_utils.py # noqa: E501 -import functools -import os -import subprocess -import torch -import torch.distributed as dist -import torch.multiprocessing as mp - - -# ---------------------------------- -# init -# ---------------------------------- -def init_dist(launcher, backend='nccl', **kwargs): - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend, **kwargs): - rank = int(os.environ['RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend, port=None): - """Initialize slurm distributed training environment. - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. - """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput( - f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - - -# ---------------------------------- -# get rank and world_size -# ---------------------------------- -def get_dist_info(): - if dist.is_available(): - initialized = dist.is_initialized() - else: - initialized = False - if initialized: - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def get_rank(): - if not dist.is_available(): - return 0 - - if not dist.is_initialized(): - return 0 - - return dist.get_rank() - - -def get_world_size(): - if not dist.is_available(): - return 1 - - if not dist.is_initialized(): - return 1 - - return dist.get_world_size() - - -def master_only(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper - - - - - - -# ---------------------------------- -# operation across ranks -# ---------------------------------- -def reduce_sum(tensor): - if not dist.is_available(): - return tensor - - if not dist.is_initialized(): - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - - return tensor - - -def gather_grad(params): - world_size = get_world_size() - - if world_size == 1: - return - - for param in params: - if param.grad is not None: - dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM) - param.grad.data.div_(world_size) - - -def all_gather(data): - world_size = get_world_size() - - if world_size == 1: - return [data] - - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to('cuda') - - local_size = torch.IntTensor([tensor.numel()]).to('cuda') - size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda')) - - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda') - tensor = torch.cat((tensor, padding), 0) - - dist.all_gather(tensor_list, tensor) - - data_list = [] - - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_loss_dict(loss_dict): - world_size = get_world_size() - - if world_size < 2: - return loss_dict - - with torch.no_grad(): - keys = [] - losses = [] - - for k in sorted(loss_dict.keys()): - keys.append(k) - losses.append(loss_dict[k]) - - losses = torch.stack(losses, 0) - dist.reduce(losses, dst=0) - - if dist.get_rank() == 0: - losses /= world_size - - reduced_losses = {k: v for k, v in zip(keys, losses)} - - return reduced_losses - diff --git a/spaces/langvision/ChatWeb/_next/static/chunks/pages/_app-4e72088c2da7d84b.js b/spaces/langvision/ChatWeb/_next/static/chunks/pages/_app-4e72088c2da7d84b.js deleted file mode 100644 index ee6fa99f89f7c8a6dc745be7f7e517f99f5b1e0e..0000000000000000000000000000000000000000 --- a/spaces/langvision/ChatWeb/_next/static/chunks/pages/_app-4e72088c2da7d84b.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[888],{41597:function(n,_,u){(window.__NEXT_P=window.__NEXT_P||[]).push(["/_app",function(){return u(11767)}])}},function(n){var _=function(_){return n(n.s=_)};n.O(0,[774,179],function(){return _(41597),_(38645)}),_N_E=n.O()}]); \ No newline at end of file diff --git a/spaces/leoberniga/Write-Stories-Using-Bloom/README.md b/spaces/leoberniga/Write-Stories-Using-Bloom/README.md deleted file mode 100644 index 0d79518cc270788a63e6b27c91983be012ab6088..0000000000000000000000000000000000000000 --- a/spaces/leoberniga/Write-Stories-Using-Bloom/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Write Stories Using Bloom -emoji: 🏢 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -duplicated_from: ghosthamlet/Write-Stories-Using-Bloom ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lianglv/microsoft-resnet-50/README.md b/spaces/lianglv/microsoft-resnet-50/README.md deleted file mode 100644 index b48d31f66ed60540c92d4a94b01679ac4d6130d8..0000000000000000000000000000000000000000 --- a/spaces/lianglv/microsoft-resnet-50/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Microsoft Resnet 50 -emoji: 🚀 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/librarian-bots/tutorials/index.html b/spaces/librarian-bots/tutorials/index.html deleted file mode 100644 index eb42f27ba7c4f235bb54d5998ad21352ee4e9bd4..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/tutorials/index.html +++ /dev/null @@ -1,23 +0,0 @@ - - - - - - My static Space - - - -
            -

            This Space is used to collect tutorials on using the Hugging Face Hub.

            -

            Current tutorials:

            - -
            - - - - - - diff --git a/spaces/lightli/bingo-newbing/src/components/turn-counter.tsx b/spaces/lightli/bingo-newbing/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
            -
            - {throttling.numUserMessagesInConversation} - - {throttling.maxNumUserMessagesInConversation} -
            -
            -
            - ) -} diff --git a/spaces/liimefruit/RVCollection/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/liimefruit/RVCollection/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index 6050505d39da0a7ffd1c99acee489d435e1c51c1..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) \ No newline at end of file diff --git a/spaces/liimefruit/RVCollection/infer_pack/modules/F0Predictor/__init__.py b/spaces/liimefruit/RVCollection/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index 56a6051ca2b02b04ef92d5150c9ef600403cb1de..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/infer_pack/modules/F0Predictor/__init__.py +++ /dev/null @@ -1 +0,0 @@ -1 \ No newline at end of file diff --git a/spaces/limingcv/AlignDet/finetune/finetune_detr_150e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0/detr_r50_8x2_150e_coco.py b/spaces/limingcv/AlignDet/finetune/finetune_detr_150e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0/detr_r50_8x2_150e_coco.py deleted file mode 100644 index a488948b4d4374622eb634ec5151602c4e3c9dfd..0000000000000000000000000000000000000000 --- a/spaces/limingcv/AlignDet/finetune/finetune_detr_150e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0/detr_r50_8x2_150e_coco.py +++ /dev/null @@ -1,280 +0,0 @@ -model = dict( - type='DETR', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(3, ), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - norm_eval=False, - style='pytorch', - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - bbox_head=dict( - type='DETRHead', - num_classes=80, - in_channels=2048, - transformer=dict( - type='Transformer', - encoder=dict( - type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'ffn', 'norm'))), - decoder=dict( - type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'cross_attn', 'norm', - 'ffn', 'norm')))), - positional_encoding=dict( - type='SinePositionalEncoding', num_feats=128, normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0)), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.0), - reg_cost=dict(type='BBoxL1Cost', weight=5.0, box_format='xywh'), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100)) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='AutoAugment', - policies=[[{ - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }], - [{ - 'type': 'Resize', - 'img_scale': [(400, 1333), (500, 1333), (600, 1333)], - 'multiscale_mode': 'value', - 'keep_ratio': True - }, { - 'type': 'RandomCrop', - 'crop_type': 'absolute_range', - 'crop_size': (384, 600), - 'allow_negative_crop': True - }, { - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'override': - True, - 'keep_ratio': - True - }]]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_train2017.json', - img_prefix='data/coco/train2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='AutoAugment', - policies=[[{ - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }], - [{ - 'type': 'Resize', - 'img_scale': [(400, 1333), (500, 1333), - (600, 1333)], - 'multiscale_mode': 'value', - 'keep_ratio': True - }, { - 'type': 'RandomCrop', - 'crop_type': 'absolute_range', - 'crop_size': (384, 600), - 'allow_negative_crop': True - }, { - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), - (544, 1333), (576, 1333), - (608, 1333), (640, 1333), - (672, 1333), (704, 1333), - (736, 1333), (768, 1333), - (800, 1333)], - 'multiscale_mode': - 'value', - 'override': - True, - 'keep_ratio': - True - }]]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ]), - val=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ]), - test=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ])) -evaluation = dict( - interval=1, metric='bbox', save_best='auto', gpu_collect=True) -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -custom_hooks = [ - dict(type='NumClassCheckHook'), - dict( - type='MMDetWandbHook', - init_kwargs=dict(project='I2B', group='finetune'), - interval=50, - num_eval_images=0, - log_checkpoint=False) -] -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = 'work_dirs/selfsup_detr_clusters-as-classes_add-contrastive-temp0.5-weight1.0/final_model.pth' -resume_from = None -workflow = [('train', 1)] -opencv_num_threads = 0 -mp_start_method = 'fork' -auto_scale_lr = dict(enable=False, base_batch_size=16) -custom_imports = None -norm_cfg = dict(type='SyncBN', requires_grad=True) -optimizer = dict( - type='AdamW', - lr=0.0001, - weight_decay=0.0001, - paramwise_cfg=dict( - custom_keys=dict(backbone=dict(lr_mult=0.1, decay_mult=1.0)))) -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', step=[100]) -runner = dict(type='EpochBasedRunner', max_epochs=150) -work_dir = 'work_dirs/finetune_detr_150e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0' -auto_resume = False -gpu_ids = range(0, 8) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Gogen Ta 10100 Firmware.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Gogen Ta 10100 Firmware.md deleted file mode 100644 index 67e2d160bcc71b8a6cd2fbd84bba67dc00154685..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Gogen Ta 10100 Firmware.md +++ /dev/null @@ -1,143 +0,0 @@ -
            -

            Gogen Ta 10100 Firmware: How to Download and Install the Latest Version

            - -

            If you own a Gogen Ta 10100 device, you may want to update its firmware to enjoy new features and improve its performance. Firmware is the software that controls the hardware of your device and determines how it works. Updating your firmware can fix bugs, enhance security, and add new functions to your device.

            - -

            But how can you download and install the latest Gogen Ta 10100 firmware? In this article, we will show you how to do it step by step. We will also explain what are the benefits of updating your firmware and what are the risks of not doing it. Read on to find out more.

            -

            Gogen Ta 10100 Firmware


            Download File ——— https://bytlly.com/2uGyjq



            - -

            What are the Benefits of Updating Gogen Ta 10100 Firmware?

            - -

            Updating your Gogen Ta 10100 firmware can bring you many benefits, such as:

            - -
              -
            • Improved stability and performance: Updating your firmware can fix some issues that may cause your device to freeze, crash, or slow down. It can also optimize your battery life and memory usage.
            • -
            • Enhanced features and functions: Updating your firmware can add new features and functions to your device, such as new apps, games, themes, languages, and more. It can also improve the compatibility and functionality of your existing apps and features.
            • -
            • Better security and protection: Updating your firmware can patch some vulnerabilities that may expose your device to hackers, viruses, or malware. It can also protect your data and privacy from unauthorized access or theft.
            • -
            - -

            What are the Risks of Not Updating Gogen Ta 10100 Firmware?

            - -

            Not updating your Gogen Ta 10100 firmware can expose you to some risks, such as:

            - -
              -
            • Poor performance and stability: Not updating your firmware can cause your device to run slower, freeze more often, or crash unexpectedly. It can also drain your battery faster and use more memory space.
            • -
            • Missing features and functions: Not updating your firmware can prevent you from enjoying new features and functions that are available for your device. It can also make some of your existing apps and features incompatible or dysfunctional.
            • -
            • Lower security and protection: Not updating your firmware can leave your device vulnerable to hackers, viruses, or malware that may harm your device or steal your data. It can also compromise your privacy and security settings.
            • -
            - -

            How to Download and Install Gogen Ta 10100 Firmware?

            - -

            To download and install the latest Gogen Ta 10100 firmware, you need to follow these steps:

            - -
              -
            1. Go to https://www.gogen.cz/firmware/, which is the official website of Gogen where you can find the latest firmware for your device.
            2. -
            3. Enter the serial number of your device in the search box and click on the search button. You can find the serial number on the back of your device or in the settings menu.
            4. -
            5. If there is a new firmware available for your device, you will see a link to download it. Click on the link and save the firmware file to your computer.
            6. -
            7. Connect your device to your computer using a USB cable. Make sure your device is turned on and has enough battery power.
            8. -
            9. Copy the firmware file from your computer to the root directory of your device's internal storage. Do not rename or modify the file.
            10. -
            11. Disconnect your device from your computer and turn it off.
            12. -
            13. Press and hold the power button and the volume up button at the same time until you see a recovery mode screen on your device.
            14. -
            15. Select "apply update from sdcard" using the volume buttons and confirm with the power button.
            16. -
            17. Select the firmware file that you copied to your device's internal storage using the volume buttons and confirm with the power button.
            18. -
            19. Wait for the installation process to complete. It may take several minutes.
            20. -
            21. When the installation is done, select "reboot system now" using the volume buttons and confirm with the power button.
            22. -
            23. Your device will restart with the new firmware installed. Enjoy!
            24. -
            - -

            Congratulations! You have successfully downloaded and installed the latest Gogen Ta 10100 firmware. Now you can enjoy all the benefits of updating your firmware and avoid all the risks of not doing it. If you have any questions or problems during or after the installation process, please contact Gogen customer service for assistance.

            - -

            If you liked this article, please share it with your friends and leave a comment below. And if you want more articles like this, please subscribe to our newsletter. Thank you for reading!

            -

            What are the Features of Gogen Ta 10100 Firmware?

            - -

            Gogen Ta 10100 is a device that allows you to read e-books, listen to music, watch videos, and browse the web. It has a 10.1-inch touch screen, a quad-core processor, a 16 GB internal memory, a micro SD card slot, a Wi-Fi connection, a micro USB port, and a 3.5 mm audio jack. It also supports various formats of e-books, such as EPUB, PDF, TXT, FB2, MOBI, and more.

            -

            - -

            Gogen Ta 10100 firmware is the software that controls the hardware of your device and determines how it works. The firmware has many features that make your device more user-friendly and enjoyable, such as:

            - -
              -
            • A customizable home screen with widgets and shortcuts to your favorite apps and functions.
            • -
            • A built-in dictionary that allows you to look up words and translations while reading.
            • -
            • A night mode that adjusts the brightness and contrast of the screen to reduce eye strain in low-light conditions.
            • -
            • A bookmark function that lets you save your reading progress and resume it later.
            • -
            • A text-to-speech function that reads aloud any text on the screen.
            • -
            • A file manager that helps you organize and manage your files on your device and your external memory card.
            • -
            • A browser that lets you access the internet and download e-books from various sources.
            • -
            • A media player that plays music and videos in various formats, such as MP3, WAV, WMA, OGG, FLAC, AVI, MP4, MKV, and more.
            • -
            • A gallery that displays your photos and videos in a slideshow mode.
            • -
            • A calendar that helps you keep track of your events and appointments.
            • -
            • A calculator that performs basic and advanced mathematical operations.
            • -
            • A clock that shows the date and time and has an alarm function.
            • -
            - -

            How to Troubleshoot Gogen Ta 10100 Firmware?

            - -

            Sometimes, you may encounter some problems with your Gogen Ta 10100 firmware, such as:

            - -
              -
            • Your device does not turn on or off properly.
            • -
            • Your device freezes or crashes frequently.
            • -
            • Your device does not respond to your touch commands or buttons.
            • -
            • Your device does not connect to Wi-Fi or other devices.
            • -
            • Your device does not play sound or display images correctly.
            • -
            • Your device does not recognize your external memory card or USB device.
            • -
            - -

            These problems may be caused by various factors, such as low battery power, corrupted files, incompatible apps, faulty hardware, or outdated firmware. To troubleshoot these problems, you can try the following solutions:

            - -
              -
            1. Charge your device fully or replace the battery if it is damaged or worn out.
            2. -
            3. Restart your device by pressing and holding the power button for 10 seconds or until it turns off and then turns on again.
            4. -
            5. Reset your device by inserting a pin or a needle into the reset hole on the back of your device and gently pressing it for a few seconds.
            6. -
            7. Update your firmware to the latest version by following the instructions in this article.
            8. -
            9. Delete any unnecessary files or apps that may take up too much space or cause conflicts on your device.
            10. -
            11. Format your external memory card or USB device on your computer or on another device before using it on your Gogen Ta 10100.
            12. -
            13. Contact Gogen customer service for further assistance if none of the above solutions work or if you have any other questions or issues with your device.
            14. -
            - -

            We hope this article has helped you to download and install the latest Gogen Ta 10100 firmware and to troubleshoot any problems with it. If you have any feedback or suggestions for us, please leave a comment below. And if you want more articles like this, please subscribe to our newsletter. Thank you for reading!

            -

            How to Backup and Restore Gogen Ta 10100 Firmware?

            - -

            Before you update your Gogen Ta 10100 firmware, it is recommended that you backup your device's data and settings. This way, you can restore them in case something goes wrong during or after the update process. To backup and restore your Gogen Ta 10100 firmware, you need to follow these steps:

            - -
              -
            1. Connect your device to your computer using a USB cable. Make sure your device is turned on and has enough battery power.
            2. -
            3. Open the file manager on your computer and locate your device's internal storage. You should see a folder named "backup" that contains all your device's data and settings.
            4. -
            5. Copy the "backup" folder to your computer or to an external memory device. You can also compress the folder into a ZIP file to save space.
            6. -
            7. After copying the "backup" folder, disconnect your device from your computer and proceed with the update process as described in this article.
            8. -
            9. If you need to restore your device's data and settings after the update, connect your device to your computer again using a USB cable.
            10. -
            11. Copy the "backup" folder from your computer or from your external memory device to your device's internal storage. You can also decompress the ZIP file if you compressed it before.
            12. -
            13. Restart your device by pressing and holding the power button for 10 seconds or until it turns off and then turns on again.
            14. -
            15. Your device will automatically restore your data and settings from the "backup" folder. Wait for the restoration process to complete.
            16. -
            - -

            Congratulations! You have successfully backed up and restored your Gogen Ta 10100 firmware. Now you can enjoy all the benefits of updating your firmware and avoid all the risks of losing your data and settings. If you have any questions or problems during or after the backup or restoration process, please contact Gogen customer service for assistance.

            - -

            Where to Find More Information About Gogen Ta 10100 Firmware?

            - -

            If you want to find more information about Gogen Ta 10100 firmware, such as its features, functions, specifications, user manual, FAQs, tips, tricks, and more, you can visit the following sources:

            - -
              -
            • The official website of Gogen: https://www.gogen.cz/. Here you can find all the information about Gogen products, including Gogen Ta 10100. You can also download the latest firmware, user manual, and other documents for your device.
            • -
            • The official forum of Gogen: https://www.gogen.cz/forum/. Here you can join the community of Gogen users and share your experiences, opinions, questions, and answers about Gogen products, including Gogen Ta 10100. You can also get support and feedback from other users and from Gogen staff.
            • -
            • The official YouTube channel of Gogen: https://www.youtube.com/channel/UC8n0Zy3wQz7Vq1x6N8wJ4gA. Here you can watch videos about Gogen products, including Gogen Ta 10100. You can also see demonstrations, reviews, tutorials, and tips on how to use your device.
            • -
            - -

            We hope this article has helped you to download and install the latest Gogen Ta 10100 firmware and to troubleshoot any problems with it. If you have any feedback or suggestions for us, please leave a comment below. And if you want more articles like this, please subscribe to our newsletter. Thank you for reading!

            -

            Conclusion

            - -

            Gogen Ta 10100 is a device that allows you to read e-books, listen to music, watch videos, and browse the web. It has a 10.1-inch touch screen, a quad-core processor, a 16 GB internal memory, a micro SD card slot, a Wi-Fi connection, a micro USB port, and a 3.5 mm audio jack. It also supports various formats of e-books, such as EPUB, PDF, TXT, FB2, MOBI, and more.

            - -

            Gogen Ta 10100 firmware is the software that controls the hardware of your device and determines how it works. The firmware has many features that make your device more user-friendly and enjoyable, such as a customizable home screen, a built-in dictionary, a night mode, a bookmark function, a text-to-speech function, a file manager, a browser, a media player, a gallery, a calendar, a calculator, and a clock.

            - -

            To download and install the latest Gogen Ta 10100 firmware, you need to go to the official website of Gogen and enter the serial number of your device. Then you need to download the firmware file to your computer and copy it to your device's internal storage. After that, you need to turn off your device and press the power button and the volume up button at the same time until you see a recovery mode screen. Then you need to select "apply update from sdcard" and choose the firmware file that you copied to your device. Finally, you need to reboot your device and enjoy the new firmware.

            - -

            Before you update your firmware, it is recommended that you backup your device's data and settings. This way, you can restore them in case something goes wrong during or after the update process. To backup and restore your device's data and settings, you need to connect your device to your computer using a USB cable and copy the "backup" folder from your device's internal storage to your computer or to an external memory device. Then you need to copy the "backup" folder back to your device's internal storage after the update and restart your device.

            - -

            If you encounter any problems with your firmware, such as poor performance, missing features, lower security, or compatibility issues, you can try some solutions, such as charging your device fully or replacing the battery if it is damaged or worn out; restarting your device by pressing and holding the power button for 10 seconds or until it turns off and then turns on again; resetting your device by inserting a pin or a needle into the reset hole on the back of your device and gently pressing it for a few seconds; updating your firmware to the latest version by following the instructions in this article; deleting any unnecessary files or apps that may take up too much space or cause conflicts on your device; formatting your external memory card or USB device on your computer or on another device before using it on your Gogen Ta 10100; contacting Gogen customer service for further assistance if none of the above solutions work or if you have any other questions or issues with your device.

            - -

            If you want to find more information about Gogen Ta 10100 firmware, such as its features, functions, specifications, user manual, FAQs, tips, tricks, and more, you can visit the official website of Gogen; the official forum of Gogen; or the official YouTube channel of Gogen.

            - -

            We hope this article has helped you to download and install the latest Gogen Ta 10100 firmware and to troubleshoot any problems with it. If you liked this article, please share it with your friends and leave a comment below. And if you want more articles like this, please subscribe to our newsletter. Thank you for reading!

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Kenwood Tk2312 Programming Software Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Kenwood Tk2312 Programming Software Download.md deleted file mode 100644 index a3493606bba0d974edf65fddab6dca947f085ea2..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Kenwood Tk2312 Programming Software Download.md +++ /dev/null @@ -1,31 +0,0 @@ - -

            How to Download and Install the Kenwood Tk2312 Programming Software

            -

            The Kenwood Tk2312 is a compact and rugged VHF/UHF radio that offers reliable performance and advanced features. To program the radio, you need to download and install the Kenwood Tk2312 Programming Software (KPG-134DK) on your computer. Here are the steps to do so:

            -
              -
            1. Go to the Kenwood website[^1^] and select your country to access the support page for the Tk2312 radio.
            2. -
            3. Click on the "Download" tab and find the KPG-134DK software under the "Manual" section. You can also use this direct link[^2^] to download the software.
            4. -
            5. Save the zip file on your computer and extract it to a folder. You will need a password to unzip the file. Contact your Kenwood dealer or sales representative to obtain the password.
            6. -
            7. Run the setup.exe file and follow the instructions to install the software on your computer. You may need to restart your computer after the installation.
            8. -
            9. Connect your Tk2312 radio to your computer using a programming cable (KPG-22UM or SPUPLUS). Make sure the radio is turned off before connecting it.
            10. -
            11. Launch the KPG-134DK software and select the COM port that corresponds to your programming cable. You can check this in the Device Manager of your computer.
            12. -
            13. Turn on your Tk2312 radio and click on the "Read" button in the software to read the current settings of the radio.
            14. -
            15. Modify the settings as you wish using the software. You can change various parameters such as frequencies, channels, tones, power levels, scan lists, etc.
            16. -
            17. Click on the "Write" button in the software to write the new settings to the radio. Wait for the process to complete and then disconnect the radio from your computer.
            18. -
            -

            Congratulations! You have successfully downloaded and installed the Kenwood Tk2312 Programming Software and programmed your radio. Enjoy using your radio with its enhanced features and functions.

            -

            Kenwood Tk2312 Programming Software Download


            Download File ——— https://bytlly.com/2uGvyV



            - -

            If you want to learn more about the Kenwood Tk2312 radio and its features, you can refer to the instruction manuals that are also available on the Kenwood website. You can download the manuals in different languages and formats. The manuals will provide you with detailed information on how to operate, maintain, and troubleshoot your radio.

            -

            The Kenwood Tk2312 radio is compatible with various accessories that can enhance its functionality and usability. You can find a list of compatible accessories on the Kenwood website or contact your Kenwood dealer or sales representative for more information. Some of the accessories include:

            -

            -
              -
            • Batteries: You can choose from different types and capacities of batteries for your radio. The standard battery is the KNB-65L Li-Ion battery that provides 1520 mAh of power. You can also use the optional KNB-63L Li-Ion battery that provides 1130 mAh of power.
            • -
            • Chargers: You can charge your radio using the KSC-35SK fast charger that can fully charge your battery in about three hours. You can also use the optional KSC-35S slow charger that can fully charge your battery in about six hours.
            • -
            • Antennas: You can replace the standard antenna of your radio with an optional antenna that suits your needs. For example, you can use the KRA-26M2 VHF helical antenna or the KRA-27M2 UHF helical antenna for better performance and durability.
            • -
            • Speaker microphones: You can use a speaker microphone to communicate hands-free with your radio. For example, you can use the KMC-45D speaker microphone that has a 2.5 mm earphone jack and a noise-canceling feature.
            • -
            • Earpieces: You can use an earpiece to listen to the radio discreetly and clearly. For example, you can use the KHS-1 earbud with PTT and VOX or the KHS-10-OH heavy-duty headset with boom mic and PTT.
            • -
            • Programming cables: You can use a programming cable to connect your radio to your computer for programming. As mentioned earlier, you can use the KPG-22UM USB programming cable or the SPUPLUS universal interface programming cable with RS-232 and USB ports.
            • -
            -

            We hope this article has helped you to download and install the Kenwood Tk2312 Programming Software and program your radio. If you have any questions or issues, please contact your Kenwood dealer or sales representative for assistance. Thank you for choosing Kenwood products.

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_new.py b/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_new.py deleted file mode 100644 index c9898f63e3f320597b96c45a3df22d941e467614..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_new.py +++ /dev/null @@ -1,132 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from uvr5_pack.lib_v5 import layers_new as layers - - -class BaseNet(nn.Module): - def __init__( - self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6)) - ): - super(BaseNet, self).__init__() - self.enc1 = layers.Conv2DBNActiv(nin, nout, 3, 1, 1) - self.enc2 = layers.Encoder(nout, nout * 2, 3, 2, 1) - self.enc3 = layers.Encoder(nout * 2, nout * 4, 3, 2, 1) - self.enc4 = layers.Encoder(nout * 4, nout * 6, 3, 2, 1) - self.enc5 = layers.Encoder(nout * 6, nout * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(nout * 8, nout * 8, dilations, dropout=True) - - self.dec4 = layers.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1) - self.dec3 = layers.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1) - self.dec2 = layers.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1) - self.lstm_dec2 = layers.LSTMModule(nout * 2, nin_lstm, nout_lstm) - self.dec1 = layers.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1) - - def __call__(self, x): - e1 = self.enc1(x) - e2 = self.enc2(e1) - e3 = self.enc3(e2) - e4 = self.enc4(e3) - e5 = self.enc5(e4) - - h = self.aspp(e5) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = torch.cat([h, self.lstm_dec2(h)], dim=1) - h = self.dec1(h, e1) - - return h - - -class CascadedNet(nn.Module): - def __init__(self, n_fft, nout=32, nout_lstm=128): - super(CascadedNet, self).__init__() - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - self.nin_lstm = self.max_bin // 2 - self.offset = 64 - - self.stg1_low_band_net = nn.Sequential( - BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm), - layers.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0), - ) - - self.stg1_high_band_net = BaseNet( - 2, nout // 4, self.nin_lstm // 2, nout_lstm // 2 - ) - - self.stg2_low_band_net = nn.Sequential( - BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm), - layers.Conv2DBNActiv(nout, nout // 2, 1, 1, 0), - ) - self.stg2_high_band_net = BaseNet( - nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2 - ) - - self.stg3_full_band_net = BaseNet( - 3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm - ) - - self.out = nn.Conv2d(nout, 2, 1, bias=False) - self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False) - - def forward(self, x): - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - l1_in = x[:, :, :bandw] - h1_in = x[:, :, bandw:] - l1 = self.stg1_low_band_net(l1_in) - h1 = self.stg1_high_band_net(h1_in) - aux1 = torch.cat([l1, h1], dim=2) - - l2_in = torch.cat([l1_in, l1], dim=1) - h2_in = torch.cat([h1_in, h1], dim=1) - l2 = self.stg2_low_band_net(l2_in) - h2 = self.stg2_high_band_net(h2_in) - aux2 = torch.cat([l2, h2], dim=2) - - f3_in = torch.cat([x, aux1, aux2], dim=1) - f3 = self.stg3_full_band_net(f3_in) - - mask = torch.sigmoid(self.out(f3)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux = torch.cat([aux1, aux2], dim=1) - aux = torch.sigmoid(self.aux_out(aux)) - aux = F.pad( - input=aux, - pad=(0, 0, 0, self.output_bin - aux.size()[2]), - mode="replicate", - ) - return mask, aux - else: - return mask - - def predict_mask(self, x): - mask = self.forward(x) - - if self.offset > 0: - mask = mask[:, :, :, self.offset : -self.offset] - assert mask.size()[3] > 0 - - return mask - - def predict(self, x, aggressiveness=None): - mask = self.forward(x) - pred_mag = x * mask - - if self.offset > 0: - pred_mag = pred_mag[:, :, :, self.offset : -self.offset] - assert pred_mag.size()[3] > 0 - - return pred_mag diff --git a/spaces/ljjggr/bingo/src/components/toaster.tsx b/spaces/ljjggr/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/ltgoslo/ssa-perin/data/field/edge_field.py b/spaces/ltgoslo/ssa-perin/data/field/edge_field.py deleted file mode 100644 index c8085bb8ac13c20eb751eab8dab44b73417943ab..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/data/field/edge_field.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env python3 -# coding=utf-8 - -import torch -from data.field.mini_torchtext.field import RawField -from data.field.mini_torchtext.vocab import Vocab -from collections import Counter -import types - - -class EdgeField(RawField): - def __init__(self): - super(EdgeField, self).__init__() - self.vocab = None - - def process(self, edges, device=None): - edges = self.numericalize(edges) - tensor = self.pad(edges, device) - return tensor - - def pad(self, edges, device): - tensor = torch.zeros(edges[0], edges[1], dtype=torch.long, device=device) - for edge in edges[-1]: - tensor[edge[0], edge[1]] = edge[2] - - return tensor - - def numericalize(self, arr): - def multi_map(array, function): - if isinstance(array, tuple): - return (array[0], array[1], function(array[2])) - elif isinstance(array, list): - return [multi_map(array[i], function) for i in range(len(array))] - else: - return array - - if self.vocab is not None: - arr = multi_map(arr, lambda x: self.vocab.stoi[x] if x is not None else 0) - return arr - - def build_vocab(self, *args): - def generate(l): - if isinstance(l, tuple): - yield l[2] - elif isinstance(l, list) or isinstance(l, types.GeneratorType): - for i in l: - yield from generate(i) - else: - return - - counter = Counter() - sources = [] - for arg in args: - if isinstance(arg, torch.utils.data.Dataset): - sources += [arg.get_examples(name) for name, field in arg.fields.items() if field is self] - else: - sources.append(arg) - - for x in generate(sources): - if x is not None: - counter.update([x]) - - self.vocab = Vocab(counter, specials=[]) diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/stl.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/stl.h deleted file mode 100644 index 6c2bebda87f1c703888307c5b4bac277655b52d6..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/stl.h +++ /dev/null @@ -1,388 +0,0 @@ -/* - pybind11/stl.h: Transparent conversion for STL data types - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "pybind11.h" -#include -#include -#include -#include -#include -#include -#include -#include - -#if defined(_MSC_VER) -#pragma warning(push) -#pragma warning(disable: 4127) // warning C4127: Conditional expression is constant -#endif - -#ifdef __has_include -// std::optional (but including it in c++14 mode isn't allowed) -# if defined(PYBIND11_CPP17) && __has_include() -# include -# define PYBIND11_HAS_OPTIONAL 1 -# endif -// std::experimental::optional (but not allowed in c++11 mode) -# if defined(PYBIND11_CPP14) && (__has_include() && \ - !__has_include()) -# include -# define PYBIND11_HAS_EXP_OPTIONAL 1 -# endif -// std::variant -# if defined(PYBIND11_CPP17) && __has_include() -# include -# define PYBIND11_HAS_VARIANT 1 -# endif -#elif defined(_MSC_VER) && defined(PYBIND11_CPP17) -# include -# include -# define PYBIND11_HAS_OPTIONAL 1 -# define PYBIND11_HAS_VARIANT 1 -#endif - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) -PYBIND11_NAMESPACE_BEGIN(detail) - -/// Extracts an const lvalue reference or rvalue reference for U based on the type of T (e.g. for -/// forwarding a container element). Typically used indirect via forwarded_type(), below. -template -using forwarded_type = conditional_t< - std::is_lvalue_reference::value, remove_reference_t &, remove_reference_t &&>; - -/// Forwards a value U as rvalue or lvalue according to whether T is rvalue or lvalue; typically -/// used for forwarding a container's elements. -template -forwarded_type forward_like(U &&u) { - return std::forward>(std::forward(u)); -} - -template struct set_caster { - using type = Type; - using key_conv = make_caster; - - bool load(handle src, bool convert) { - if (!isinstance(src)) - return false; - auto s = reinterpret_borrow(src); - value.clear(); - for (auto entry : s) { - key_conv conv; - if (!conv.load(entry, convert)) - return false; - value.insert(cast_op(std::move(conv))); - } - return true; - } - - template - static handle cast(T &&src, return_value_policy policy, handle parent) { - if (!std::is_lvalue_reference::value) - policy = return_value_policy_override::policy(policy); - pybind11::set s; - for (auto &&value : src) { - auto value_ = reinterpret_steal(key_conv::cast(forward_like(value), policy, parent)); - if (!value_ || !s.add(value_)) - return handle(); - } - return s.release(); - } - - PYBIND11_TYPE_CASTER(type, _("Set[") + key_conv::name + _("]")); -}; - -template struct map_caster { - using key_conv = make_caster; - using value_conv = make_caster; - - bool load(handle src, bool convert) { - if (!isinstance(src)) - return false; - auto d = reinterpret_borrow(src); - value.clear(); - for (auto it : d) { - key_conv kconv; - value_conv vconv; - if (!kconv.load(it.first.ptr(), convert) || - !vconv.load(it.second.ptr(), convert)) - return false; - value.emplace(cast_op(std::move(kconv)), cast_op(std::move(vconv))); - } - return true; - } - - template - static handle cast(T &&src, return_value_policy policy, handle parent) { - dict d; - return_value_policy policy_key = policy; - return_value_policy policy_value = policy; - if (!std::is_lvalue_reference::value) { - policy_key = return_value_policy_override::policy(policy_key); - policy_value = return_value_policy_override::policy(policy_value); - } - for (auto &&kv : src) { - auto key = reinterpret_steal(key_conv::cast(forward_like(kv.first), policy_key, parent)); - auto value = reinterpret_steal(value_conv::cast(forward_like(kv.second), policy_value, parent)); - if (!key || !value) - return handle(); - d[key] = value; - } - return d.release(); - } - - PYBIND11_TYPE_CASTER(Type, _("Dict[") + key_conv::name + _(", ") + value_conv::name + _("]")); -}; - -template struct list_caster { - using value_conv = make_caster; - - bool load(handle src, bool convert) { - if (!isinstance(src) || isinstance(src)) - return false; - auto s = reinterpret_borrow(src); - value.clear(); - reserve_maybe(s, &value); - for (auto it : s) { - value_conv conv; - if (!conv.load(it, convert)) - return false; - value.push_back(cast_op(std::move(conv))); - } - return true; - } - -private: - template ().reserve(0)), void>::value, int> = 0> - void reserve_maybe(sequence s, Type *) { value.reserve(s.size()); } - void reserve_maybe(sequence, void *) { } - -public: - template - static handle cast(T &&src, return_value_policy policy, handle parent) { - if (!std::is_lvalue_reference::value) - policy = return_value_policy_override::policy(policy); - list l(src.size()); - size_t index = 0; - for (auto &&value : src) { - auto value_ = reinterpret_steal(value_conv::cast(forward_like(value), policy, parent)); - if (!value_) - return handle(); - PyList_SET_ITEM(l.ptr(), (ssize_t) index++, value_.release().ptr()); // steals a reference - } - return l.release(); - } - - PYBIND11_TYPE_CASTER(Type, _("List[") + value_conv::name + _("]")); -}; - -template struct type_caster> - : list_caster, Type> { }; - -template struct type_caster> - : list_caster, Type> { }; - -template struct type_caster> - : list_caster, Type> { }; - -template struct array_caster { - using value_conv = make_caster; - -private: - template - bool require_size(enable_if_t size) { - if (value.size() != size) - value.resize(size); - return true; - } - template - bool require_size(enable_if_t size) { - return size == Size; - } - -public: - bool load(handle src, bool convert) { - if (!isinstance(src)) - return false; - auto l = reinterpret_borrow(src); - if (!require_size(l.size())) - return false; - size_t ctr = 0; - for (auto it : l) { - value_conv conv; - if (!conv.load(it, convert)) - return false; - value[ctr++] = cast_op(std::move(conv)); - } - return true; - } - - template - static handle cast(T &&src, return_value_policy policy, handle parent) { - list l(src.size()); - size_t index = 0; - for (auto &&value : src) { - auto value_ = reinterpret_steal(value_conv::cast(forward_like(value), policy, parent)); - if (!value_) - return handle(); - PyList_SET_ITEM(l.ptr(), (ssize_t) index++, value_.release().ptr()); // steals a reference - } - return l.release(); - } - - PYBIND11_TYPE_CASTER(ArrayType, _("List[") + value_conv::name + _(_(""), _("[") + _() + _("]")) + _("]")); -}; - -template struct type_caster> - : array_caster, Type, false, Size> { }; - -template struct type_caster> - : array_caster, Type, true> { }; - -template struct type_caster> - : set_caster, Key> { }; - -template struct type_caster> - : set_caster, Key> { }; - -template struct type_caster> - : map_caster, Key, Value> { }; - -template struct type_caster> - : map_caster, Key, Value> { }; - -// This type caster is intended to be used for std::optional and std::experimental::optional -template struct optional_caster { - using value_conv = make_caster; - - template - static handle cast(T_ &&src, return_value_policy policy, handle parent) { - if (!src) - return none().inc_ref(); - if (!std::is_lvalue_reference::value) { - policy = return_value_policy_override::policy(policy); - } - return value_conv::cast(*std::forward(src), policy, parent); - } - - bool load(handle src, bool convert) { - if (!src) { - return false; - } else if (src.is_none()) { - return true; // default-constructed value is already empty - } - value_conv inner_caster; - if (!inner_caster.load(src, convert)) - return false; - - value.emplace(cast_op(std::move(inner_caster))); - return true; - } - - PYBIND11_TYPE_CASTER(T, _("Optional[") + value_conv::name + _("]")); -}; - -#if PYBIND11_HAS_OPTIONAL -template struct type_caster> - : public optional_caster> {}; - -template<> struct type_caster - : public void_caster {}; -#endif - -#if PYBIND11_HAS_EXP_OPTIONAL -template struct type_caster> - : public optional_caster> {}; - -template<> struct type_caster - : public void_caster {}; -#endif - -/// Visit a variant and cast any found type to Python -struct variant_caster_visitor { - return_value_policy policy; - handle parent; - - using result_type = handle; // required by boost::variant in C++11 - - template - result_type operator()(T &&src) const { - return make_caster::cast(std::forward(src), policy, parent); - } -}; - -/// Helper class which abstracts away variant's `visit` function. `std::variant` and similar -/// `namespace::variant` types which provide a `namespace::visit()` function are handled here -/// automatically using argument-dependent lookup. Users can provide specializations for other -/// variant-like classes, e.g. `boost::variant` and `boost::apply_visitor`. -template class Variant> -struct visit_helper { - template - static auto call(Args &&...args) -> decltype(visit(std::forward(args)...)) { - return visit(std::forward(args)...); - } -}; - -/// Generic variant caster -template struct variant_caster; - -template class V, typename... Ts> -struct variant_caster> { - static_assert(sizeof...(Ts) > 0, "Variant must consist of at least one alternative."); - - template - bool load_alternative(handle src, bool convert, type_list) { - auto caster = make_caster(); - if (caster.load(src, convert)) { - value = cast_op(caster); - return true; - } - return load_alternative(src, convert, type_list{}); - } - - bool load_alternative(handle, bool, type_list<>) { return false; } - - bool load(handle src, bool convert) { - // Do a first pass without conversions to improve constructor resolution. - // E.g. `py::int_(1).cast>()` needs to fill the `int` - // slot of the variant. Without two-pass loading `double` would be filled - // because it appears first and a conversion is possible. - if (convert && load_alternative(src, false, type_list{})) - return true; - return load_alternative(src, convert, type_list{}); - } - - template - static handle cast(Variant &&src, return_value_policy policy, handle parent) { - return visit_helper::call(variant_caster_visitor{policy, parent}, - std::forward(src)); - } - - using Type = V; - PYBIND11_TYPE_CASTER(Type, _("Union[") + detail::concat(make_caster::name...) + _("]")); -}; - -#if PYBIND11_HAS_VARIANT -template -struct type_caster> : variant_caster> { }; -#endif - -PYBIND11_NAMESPACE_END(detail) - -inline std::ostream &operator<<(std::ostream &os, const handle &obj) { - os << (std::string) str(obj); - return os; -} - -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) - -#if defined(_MSC_VER) -#pragma warning(pop) -#endif diff --git a/spaces/maher13/arabic-asr/app.py b/spaces/maher13/arabic-asr/app.py deleted file mode 100644 index a6b084720b3e1e72317889fa1956ef3ff694df34..0000000000000000000000000000000000000000 --- a/spaces/maher13/arabic-asr/app.py +++ /dev/null @@ -1,64 +0,0 @@ -from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC -import soundfile as sf -import torch -import gradio as gr -import torchaudio - -# load model and processor -processor = Wav2Vec2Processor.from_pretrained("maher13/arabic-iti") -model = Wav2Vec2ForCTC.from_pretrained("maher13/arabic-iti").eval() -# define function to read in sound file -def map_to_array(file): - speech, sr = torchaudio.load(file) - if sr != 16000: - transform = torchaudio.transforms.Resample(orig_freq=sr, - new_freq=16000) - speech= transform(speech) - speech = speech[0] - speech = speech.numpy() - - return speech - -# tokenize -def inference(audio_file, audio_file2): - if audio_file: - input_values = processor(map_to_array(audio_file.name), return_tensors="pt", padding="longest").input_values # Batch size 1 - logits = model(input_values).logits - - with torch.no_grad(): - predicted_ids = torch.argmax(logits, dim=-1) - predicted_ids[predicted_ids == -100] = processor.tokenizer.pad_token_id - transcription1 = processor.tokenizer.batch_decode(predicted_ids)[0] - else: - transcription1 = "N/A" - - if audio_file2: - input_values = processor(map_to_array(audio_file2.name), return_tensors="pt", padding="longest").input_values # Batch size 1 - logits = model(input_values).logits - - with torch.no_grad(): - predicted_ids = torch.argmax(logits, dim=-1) - predicted_ids[predicted_ids == -100] = processor.tokenizer.pad_token_id - transcription2 = processor.tokenizer.batch_decode(predicted_ids)[0] - else : - transcription2 = "N/A" - - - return transcription1, transcription2 - - -gradio_ui = gr.Interface( - fn=inference, - title="Speech to Text Graduation project \n sponsored by TensorGraph", - inputs= - [ - gr.inputs.Audio(source = 'microphone', type="file", optional = True), - gr.inputs.Audio(source = 'upload', type="file", optional = True) - ], - outputs=[ - gr.outputs.Textbox(label="Auto-Transcript"), - gr.outputs.Textbox(label="Auto-Transcript") - ], -) - -gradio_ui.launch(share=True) \ No newline at end of file diff --git a/spaces/matthoffner/starchat-ui/components/Promptbar/components/PromptbarSettings.tsx b/spaces/matthoffner/starchat-ui/components/Promptbar/components/PromptbarSettings.tsx deleted file mode 100644 index 5fad6f9ca3d08ccb3ce1cdaf53d23632534a1632..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/components/Promptbar/components/PromptbarSettings.tsx +++ /dev/null @@ -1,7 +0,0 @@ -import { FC } from 'react'; - -interface Props {} - -export const PromptbarSettings: FC = () => { - return
            ; -}; diff --git a/spaces/menghanxia/ReversibleHalftoning/model/base_module.py b/spaces/menghanxia/ReversibleHalftoning/model/base_module.py deleted file mode 100644 index f538e2496da1ecb3d7f7eced5de67b54ac4eb300..0000000000000000000000000000000000000000 --- a/spaces/menghanxia/ReversibleHalftoning/model/base_module.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch.nn as nn -from torch.nn import functional as F -import torch -import numpy as np - -def tensor2array(tensors): - arrays = tensors.detach().to("cpu").numpy() - return np.transpose(arrays, (0, 2, 3, 1)) - - -class ResidualBlock(nn.Module): - def __init__(self, channels): - super(ResidualBlock, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(channels, channels, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(channels, channels, kernel_size=3, padding=1) - ) - - def forward(self, x): - residual = self.conv(x) - return x + residual - - -class DownsampleBlock(nn.Module): - def __init__(self, in_channels, out_channels, withConvRelu=True): - super(DownsampleBlock, self).__init__() - if withConvRelu: - self.conv = nn.Sequential( - nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=2), - nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), - nn.ReLU(inplace=True) - ) - else: - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=2) - - def forward(self, x): - return self.conv(x) - - -class ConvBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum): - super(ConvBlock, self).__init__() - self.inConv = nn.Sequential( - nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1), - nn.ReLU(inplace=True) - ) - layers = [] - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1)) - layers.append(nn.ReLU(inplace=True)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - x = self.inConv(x) - x = self.conv(x) - return x - - -class UpsampleBlock(nn.Module): - def __init__(self, in_channels, out_channels): - super(UpsampleBlock, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=1), - nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), - nn.ReLU(inplace=True) - ) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2, mode='nearest') - return self.conv(x) - - -class SkipConnection(nn.Module): - def __init__(self, channels): - super(SkipConnection, self).__init__() - self.conv = nn.Conv2d(2 * channels, channels, 1, bias=False) - - def forward(self, x, y): - x = torch.cat((x, y), 1) - return self.conv(x) \ No newline at end of file diff --git a/spaces/merve/data-leak/source/_posts/2021-10-31-uncertainty-calibration.md b/spaces/merve/data-leak/source/_posts/2021-10-31-uncertainty-calibration.md deleted file mode 100644 index 0e097d412fff555af6b338ffa6d704d4ba05a454..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/_posts/2021-10-31-uncertainty-calibration.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -template: post.html -title: Are Model Predictions Probabilities? -socialsummary: Machine learning models express their uncertainty as model scores, but through calibration we can transform these scores into probabilities for more effective decision making. -shareimg: https://pair.withgoogle.com/explorables/images/uncertainty-calibration.png -shareimgabstract: https://pair.withgoogle.com/explorables/images/uncertainty-calibration-abstract.png -permalink: /uncertainty-calibration/ ---- - -
            -
            -
            - -
            - -If a machine learning model tells you that it’s going to rain tomorrow with a score of 0.60, should you buy an umbrella?1 - -

            In the diagram, we have a hypothetical machine learning classifier for predicting rainy days. For each date, the classifier reads in relevant signals like temperature and humidity and spits out a number between 0 and 1. Each data point represents a different day, with the position representing the model’s prediction for rain that day and the symbol (🌧️ or ☀️) representing the true weather that occurred that day. - -

            Do the model’s predictions tell us the probability of rain?
            - -

            In general, machine learning classifiers don’t just give binary predictions, but instead provide some numerical value between 0 and 1 for their predictions. This number, sometimes called the *model score* or *confidence*, is a way for the model to express their certainty about what class the input data belongs to. In most applications, the exact score is ignored and we use a threshold to round the score to a binary answer, yes or no, rain or not. However, by using *calibration* we can transform these scores into probabilities and use them more effectively in decision making. - -

            - -

            Thresholding

            - -

            One traditional approach to using a model’s score is through *thresholding*. In this setting, you choose a threshold *t* and then declare that the model thinks it’s going to rain if the score is above *t* and it’s not if the score is below, thereby converting the score to a binary outcome. When you observe the actual weather, you know how often it was wrong and can compute key aggregate statistics like *accuracy*. - -

            We can sometimes treat these aggregate statistics themselves as probabilities. For example, accuracy is the probability that the binary prediction of your model (rain or not) is equal to the ground truth (🌧️ or ☀️). -

            - -

            Adjustable Thresholding

            - -

            The threshold can easily be changed after the model is trained. - -

            Thresholding uses the model’s score to make a decision, but fails to consider the model’s confidence. The model score is only used to decide whether you are above or below the threshold, but the magnitude of the difference isn’t considered. For example, if you threshold at 0.4, the model’s predictions of 0.6 and 0.9 are treated the same, even though the model is much more confident in the latter. - -

            Can we do a better job of incorporating the model score into our understanding of the model?
            - -
            - -

            Calibration

            - -

            *Calibration* lets us compare our model scores directly to probabilities. - -

            For this technique, instead of one threshold, we have many, which we use to split the predictions into buckets. Again, once we observe the ground truth, we can see what proportion of the predictions in each bucket were rainy days (🌧️). This proportion is the *empirical probability* of rain for that bucket. - -

            Ideally, we want this proportion to be higher for higher buckets, so that the probability is roughly in line with the average prediction for that bucket. We call the difference between the proportion and the predicted rates the calibration error, and by averaging over all of the buckets, we can calculate the Expected Calibration Error. If the proportions and the predictions line up for our use case, meaning the error is low, then we say the model is “well-calibrated” and we can consider treating the model score as the probability that it will actually rain. -

            - -

            Adjusting Calibration

            - -

            We saw above that a well-calibrated model allows us to treat our model score as a kind of probability. But if we start with a poorly calibrated model, one which is over or under-confident. Is there anything we can do to improve it? - -

            It turns out that, in many settings, we can adjust the model score without really changing the model’s decisions, as long as our adjustment preserves the order of the scores2. For example, if we map all of the scores from our original model to their squares, we don’t change the order of the data with respect to the model score. Thus, quantities like accuracy will stay the same as long as we appropriately map the threshold to its square as well. However, these adjustments *do* change the calibration of a model by changing which data points lie in which buckets. - -

            **Try** **tweaking the thresholds** to *calibrate* the model scores for our data3 – how much can you improve the model's calibration?
            - -

            In general, we don’t have to rely on tweaking the model scores by hand to improve calibration. If we are trying to calibrate the model for a particular data distribution, we can use mathematical techniques like Isotonic Regression or Platt Scaling to generate the correct remapping for model scores. -

            - -

            Shifting Data

            - -

            While good calibration is an important property for a model’s scores to be interpreted as probabilities, it alone does not capture all aspects of model uncertainty. - -

            What happens if it starts to rain less frequently after we've trained and calibrated our model? Notice how the calibration drops, even if we use the same calibrated model scores as before. - -

            Models are usually only well calibrated with respect to certain data distributions. If the data changes significantly between training and serving time, our models might cease to be well calibrated and we can’t rely on using our model scores as probabilities. -

            - -

            Beyond Calibration

            - -

            Calibration can sometimes be easy to game. For example, if we knew that it rains 50% of the time over the course of the year, then we could create a model with a constant prediction of 0.5 every day. This would have perfect calibration, despite not being a very useful model for distinguishing day-to-day differences in the probability of rain. This highlights an important issue: - -

            Better calibration doesn’t mean more accurate predictions.
            - -

            It turns out that statisticians identified the issue with focusing solely on calibration in meteorology when comparing weather forecasts, and came up with a solution. Proper scoring rules provide an alternative approach to measuring the quality of probabilistic forecasts, by using a formula to measure the distance between the model’s predictions and the true event probabilities. These rules guarantee that a better value must mean a better prediction in terms of accuracy and calibration. Such rules incentivize models to be both better calibrated and more accurate. - -

            -
            -
            - - -

            More Reading

            - -

            This post is only the beginning of the discussion on the connections between machine learning models, probability, and uncertainty. In practice, when developing machine learning models with uncertainty in mind, we may need to go beyond calibration. - -

            In some settings, errors are not all equal. For example, if we are training a classifier to predict if a patient needs to be tested for a disease, then a false negative (missing a case of the disease) may be more detrimental than a false positive (accidentally having a patient tested). In such cases, we may not want a perfectly calibrated model, but may want to skew the model scores towards one class or another. The field of Statistical Decision Theory provides us with tools to determine how to better use model scores in this more general setting. Calibration may also lead to tension with other important goals like model fairness in some applications. - -

            Beyond this, so far we’ve only considered the case of using a single model score, i.e. a point estimate. If we trained the model a thousand times with different random seeds, or resampled the training data, we would almost certainly generate a collection of different model scores for a given input. To truly unpack the different sources of uncertainty that we might encounter, we might want to look towards *distributional* approaches to measuring uncertainty, using techniques like Deep Ensembles or Bayesian modeling. We will dig deeper into these in future posts. - -

            Credits

            - -

            Nithum Thain, Adam Pearce, Jasper Snoek & Mahima Pushkarna // March 2022 - -

            Thanks to Balaji Lakshminarayanan, Emily Reif, Lucas Dixon, Martin Wattenberg, Fernanda Viégas, Ian Kivlichan, Nicole Mitchell, and Meredith Morris for their help with this piece. - -

            Footnotes

            - -

            Your decision might depend both on the probability of rain and its severity (i.e. how much rain there is going to be). We’ll focus just on the probability for now. - -

            Applying a strictly monotonic function to the model always keeps the order of scores the same. - -

            In this example, we adjust the model scores by changing the model scores of elements within a bucket to the mean of the bucket. -

            More Explorables

            - -

            - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/data-leak/source/third_party/topojson-client.js b/spaces/merve/data-leak/source/third_party/topojson-client.js deleted file mode 100644 index 728070f185d11aa72b3f78ab88037275614fe89b..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/third_party/topojson-client.js +++ /dev/null @@ -1,2 +0,0 @@ -// https://github.com/topojson/topojson-client v3.0.1 Copyright 2019 Mike Bostock -!function(e,r){"object"==typeof exports&&"undefined"!=typeof module?r(exports):"function"==typeof define&&define.amd?define(["exports"],r):r((e=e||self).topojson=e.topojson||{})}(this,function(e){"use strict";function r(e){return e}function t(e){if(null==e)return r;var t,n,o=e.scale[0],a=e.scale[1],i=e.translate[0],c=e.translate[1];return function(e,r){r||(t=n=0);var u=2,f=e.length,s=new Array(f);for(s[0]=(t+=e[0])*o+i,s[1]=(n+=e[1])*a+c;ui&&(i=e[0]),e[1]c&&(c=e[1])}function f(e){switch(e.type){case"GeometryCollection":e.geometries.forEach(f);break;case"Point":u(e.coordinates);break;case"MultiPoint":e.coordinates.forEach(u)}}for(r in e.arcs.forEach(function(e){for(var r,t=-1,u=e.length;++ti&&(i=r[0]),r[1]c&&(c=r[1])}),e.objects)f(e.objects[r]);return[o,a,i,c]}function o(e,r){var t=r.id,n=r.bbox,o=null==r.properties?{}:r.properties,i=a(e,r);return null==t&&null==n?{type:"Feature",properties:o,geometry:i}:null==n?{type:"Feature",id:t,properties:o,geometry:i}:{type:"Feature",id:t,bbox:n,properties:o,geometry:i}}function a(e,r){var n=t(e.transform),o=e.arcs;function a(e,r){r.length&&r.pop();for(var t=o[e<0?~e:e],a=0,i=t.length;a1)n=function(e,r,t){var n,o=[],a=[];function i(e){var r=e<0?~e:e;(a[r]||(a[r]=[])).push({i:e,g:n})}function c(e){e.forEach(i)}function u(e){e.forEach(c)}return function e(r){switch(n=r,r.type){case"GeometryCollection":r.geometries.forEach(e);break;case"LineString":c(r.arcs);break;case"MultiLineString":case"Polygon":u(r.arcs);break;case"MultiPolygon":!function(e){e.forEach(u)}(r.arcs)}}(r),a.forEach(null==t?function(e){o.push(e[0].i)}:function(e){t(e[0].g,e[e.length-1].g)&&o.push(e[0].i)}),o}(0,r,t);else for(o=0,n=new Array(a=e.arcs.length);o1)for(var a,c,f=1,s=u(o[0]);fs&&(c=o[0],o[0]=o[f],o[f]=c,s=a);return o}).filter(function(e){return e.length>0})}}function f(e,r){for(var t=0,n=e.length;t>>1;e[o]=2))throw new Error("n must be ≥2");var t,o=(u=e.bbox||n(e))[0],a=u[1],i=u[2],c=u[3];r={scale:[i-o?(i-o)/(t-1):1,c-a?(c-a)/(t-1):1],translate:[o,a]}}var u,f,l=s(r),h=e.objects,p={};function g(e){return l(e)}function y(e){var r;switch(e.type){case"GeometryCollection":r={type:"GeometryCollection",geometries:e.geometries.map(y)};break;case"Point":r={type:"Point",coordinates:g(e.coordinates)};break;case"MultiPoint":r={type:"MultiPoint",coordinates:e.coordinates.map(g)};break;default:return e}return null!=e.id&&(r.id=e.id),null!=e.bbox&&(r.bbox=e.bbox),null!=e.properties&&(r.properties=e.properties),r}for(f in h)p[f]=y(h[f]);return{type:"Topology",bbox:u,transform:r,objects:p,arcs:e.arcs.map(function(e){var r,t=0,n=1,o=e.length,a=new Array(o);for(a[0]=l(e[0],0);++t { - if (err){ - console.log(err) - return check() - } - - if (nextStr == lastStr) return - lastStr = nextStr - - if (path.includes('.js')){ - console.clear() - console.log('js', new Date()) - - Function(nextStr.replace('\n', ';').replace('\n', ';'))() - } - - if (path.includes('.css')){ - console.log('css', new Date()) - - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path)) - .forEach(d => d.href = d.href.split('?')[0] + '?' + Math.random()) - } - }) - - setTimeout(check, window.timeoutMS || 9999999999) - } - check() -} - - -watchFile('https://roadtolarissa.com/colab/gender-over-time-colab/style.css', 'js') -watchFile('https://roadtolarissa.com/colab/gender-over-time-colab/script.js', 'js') diff --git a/spaces/merve/fill-in-the-blank/source/private-and-fair/style.css b/spaces/merve/fill-in-the-blank/source/private-and-fair/style.css deleted file mode 100644 index 420336c2e0c31186e29779935402929f9275b845..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/private-and-fair/style.css +++ /dev/null @@ -1,307 +0,0 @@ -html{ - min-width: 830px; - overflow-x: auto; -} - -.highlight-yellow{ - margin-top: -30px; - margin-bottom: 20px; -} -.highlight-yellow a{ - background: yellow; - padding: 5px; -} - -.tooltip{ - width: 112px; -} - -.tooltip-footnote { - top: -1000px; - position: absolute; - padding: 10px; - background: rgba(255, 255, 255, .8); - border: 0px solid lightgray; - - width: 300px !important; - font-size: 14px; - line-height: 1.4em; - background: rgba(0, 0, 0, .8); - color: #fff; - pointer-events: all !important; -} -.tooltip-footnote a{ - color: #fff !important; - -} -.tooltip-footnote:hover{ -/* opacity: 1; - pointer-events: all !important; -*/} - -.tooltip-footnote-hidden{ - opacity: 0; - transition: opacity .3s; - transition-delay: .2s; - pointer-events: none !important; -} - -.tooltip-hidden{ - pointer-events: none !important; -} - -@media (max-width: 590px){ - .footend{ - margin-left: 0px; - width: 10px; - } - - - div.tooltip-footnote{ - transition: all 0s !important; - transition-delay: 0s !important; - - display: none; - position: fixed; - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -.footstart{ - padding-left: 2px; - height: 8px !important; - /*background: red;*/ - /*display: inline-block;*/ - line-height: 0em; -} - - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -circle.point{ - stroke: #000; - stroke-width: .5; - fill-opacity: .5; - cursor: pointer; -} - -circle.point.swapped{ - stroke-width: 2; -} - -path.boundry-line{ - pointer-events: none; - opacity: .1; -} - -.dragging{ - cursor: pointer; -} - -.sliders{ - position: relative; - top: 10px; - padding-top: 5px; -} - -.slider-container{ - height: 30px; -} - -.graph{ - width: 900px; -} - - -.chart-title{ - font-size: 14px; - font-weight: 600; - text-align: center; - margin-top: 25px; - /*padding-top: 5px;*/ -} - -.epoch-graph{ - max-width: 700px; - margin: 0px auto; -} - -.decision-boundry{ - max-width: 320px; - margin: 0px auto; -} - - - -.digit-button-container{ - max-width: 400px; - margin: 0px auto; - display: flex; - gap: 10px; -} - - -.button{ - text-align: center; - flex-grow: 1; - flex-basis: 0; - padding: 5px; - cursor: pointer; - user-select: none; - - outline: 1px solid #ccc; - - position: relative; -} - -@media (hover: hover) and (pointer: fine) { - .button:hover{ - /*border-color: #000;*/ - /*border-left-width: 1px;*/ - outline: 1px solid #000 !important; - z-index: 100; - } -} - - -.button.active{ - background: #000; - color: #fff; - outline: 0px; - /*font-weight: 500;*/ -} - - -.button-row > div{ - display: inline-block; -} - -.accuracy-line{ - stroke: #888; -} -.accuracy-line.active{ - stroke-width: 3px; - stroke: #000; - /*stroke: rgb(219, 61, 17);*/ -} - -.accuracy-circle{ - fill: #888; - /*opacity: .5;*/ -} -.accuracy-circle text{ - pointer-events: none; -} -.accuracy-circle.active{ - opacity: 1; - fill: #000; - - /*fill: rgb(219, 61, 17);*/ -} - -.accuracy-label.active text{ - font-weight: 600 !important; -} - -.digit-button-container{ - margin-bottom: 30px; -} - - - -.slider-native { - -webkit-appearance: none; - /*width: 100%;*/ - width: 180px; - height: 15px; - background: #d3d3d3; - outline: none; - -webkit-transition: .2s; - transition: opacity .2s; - position: relative; - left: 1em; - top: 2px; -} - -.slider-native::-webkit-slider-thumb { - -webkit-appearance: none; - appearance: none; - width: 30px; - height: 30px; - border-radius: 50%; - background: #000; - cursor: pointer; -} -.slider-native:hover { - opacity: 1; -} - -svg{ - user-select: none; -} - - -.axis .tick text{ - fill: #555; -} - -.annotation{ - font-size: 12px; -} - - - -ul{ - margin-top: -1em; - list-style: none; - -} - -li{ - margin-left: 10px; -} - - - -.info-box .post:hover .img{ - outline: 1px solid #333 !important; -} -.info-box .post:hover .title{ - text-decoration: underline !important; -} - -.post-summary{ - display: none; -} - - -.x .tick.active path{ - stroke: rgba(255,255,0,.5) !important; - stroke-width: 9; -} - - -.active circle{ - stroke-width: 2; - stroke: #000; -} - -.accuracy-rect.active rect:first-child{ - stroke: yellow !important; - fill: #ccc !important; - fill-opacity: 1; - stroke-width: 5; - paint-order: stroke; - -} \ No newline at end of file diff --git a/spaces/merve/hidden-bias/source/_posts/2019-10-02-bias.html b/spaces/merve/hidden-bias/source/_posts/2019-10-02-bias.html deleted file mode 100644 index 44c586c9489408fa9694149309ffefa3f3fc4d1b..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/_posts/2019-10-02-bias.html +++ /dev/null @@ -1,126 +0,0 @@ - ---- -template: post.html -title: Hidden Bias -summary: Models trained on real-world data can encode real-world bias. Hiding information about protected classes doesn't always fix things — sometimes it can even hurt. -permalink: /hidden-bias/ -shareimg: https://pair.withgoogle.com/explorables/images/hidden-bias.png -date: 2020-05-01 - ---- - - -
            -
            -
            - - -
            -

            Modeling College GPA

            - -

            Let's pretend we're college admissions officers trying to predict the GPA students will have in college (in these examples we'll use simulated data). - -

            One simple approach: predict that students will have the same GPA in college as they did in high school. -

            - - -
            -

            This is at best a very rough approximation, and it misses a key feature of this data set: students usually have better grades in high school than in college - -

            We're over-predicting college grades more often than we under-predict. -

            - - -
            -

            Predicting with ML

            -

            If we switched to using a machine learning model and entered these student grades, it would recognize this pattern and adjust the prediction. - -

            The model does this without knowing anything about the real-life context of grading in high school versus college. -

            - - -
            -

            Giving the model more information about students increases accuracy more... -

            - - -
            -

            ...and more. -

            - - -
            -

            Models can encode previous bias

            -

            All of this sensitive information about students is just a long list of numbers to model. - -

            If a sexist college culture has historically led to lower grades for   female students, the model will pick up on that correlation and predict lower grades for women. - -

            Training on historical data bakes in historical biases. Here the sexist culture has improved, but the model learned from the past correlation and still predicts higher grades for   men. -

            - -
            -

            Hiding protected classes from the model might not stop discrimination

            - -

            Even if we don't tell the model students' genders, it might still score   female students poorly. - -

            With detailed enough information about every student, the model can still synthesize a proxy for gender out of other variables. -

            - - -
            -

            Including a protected attribute may even decrease discrimination

            - -

            Let's look at a simplified model, one only taking into account the recommendation of an alumni interviewer. -

            - - -
            -

            The interviewer is quite accurate, except that they're biased against students with a   low household income. - -

            In our toy model, students' grades don't depend on their income once they're in college. In other words, we have biased inputs and unbiased outcomes—the opposite of the previous example, where the inputs weren't biased, but the toxic culture biased the outcomes. -

            - - -
            -

            If we also tell the model each student's household income, it will naturally correct for the interviewer's overrating of   high-income students just like it corrected for the difference between high school and college GPAs. - -

            By carefully considering and accounting for bias, we've made the model fairer and more accurate. This isn't always easy to do, especially in circumstances like the historically toxic college culture where unbiased data is limited. - -

            And there are fundamental fairness trade-offs that have to be made. Check out the Measuring Fairness explorable to see how those tradeoffs work.
            - - -

            - -

            Adam Pearce // May 2020 - -

            Thanks to Carey Radebaugh, Dan Nanas, David Weinberger, Emily Denton, Emily Reif, Fernanda Viégas, Hal Abelson, James Wexler, Kristen Olson, Lucas Dixon, Mahima Pushkarna, Martin Wattenberg, Michael Terry, Rebecca Salois, Timnit Gebru, Tulsee Doshi, Yannick Assogba, Yoni Halpern, Zan Armstrong, and my other colleagues at Google for their help with this piece. -

            - -
            -
            -
            - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/hidden-bias/source/third_party/mobilenet@1.0.0.js b/spaces/merve/hidden-bias/source/third_party/mobilenet@1.0.0.js deleted file mode 100644 index d50ffe68663e1aabfc07faec02e8a3cb41b5dfe5..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/third_party/mobilenet@1.0.0.js +++ /dev/null @@ -1,2 +0,0 @@ -// @tensorflow/tfjs-models Copyright 2019 Google -!function(e,a){"object"==typeof exports&&"undefined"!=typeof module?a(exports,require("@tensorflow/tfjs")):"function"==typeof define&&define.amd?define(["exports","@tensorflow/tfjs"],a):a((e=e||self).mobilenet={},e.tf)}(this,function(e,a){"use strict";function r(e,a,r,o){return new(r||(r=Promise))(function(i,t){function n(e){try{l(o.next(e))}catch(e){t(e)}}function s(e){try{l(o.throw(e))}catch(e){t(e)}}function l(e){e.done?i(e.value):new r(function(a){a(e.value)}).then(n,s)}l((o=o.apply(e,a||[])).next())})}function o(e,a){var r,o,i,t,n={label:0,sent:function(){if(1&i[0])throw i[1];return i[1]},trys:[],ops:[]};return t={next:s(0),throw:s(1),return:s(2)},"function"==typeof Symbol&&(t[Symbol.iterator]=function(){return this}),t;function s(t){return function(s){return function(t){if(r)throw new TypeError("Generator is already executing.");for(;n;)try{if(r=1,o&&(i=2&t[0]?o.return:t[0]?o.throw||((i=o.return)&&i.call(o),0):o.next)&&!(i=i.call(o,t[1])).done)return i;switch(o=0,i&&(t=[2&t[0],i.value]),t[0]){case 0:case 1:i=t;break;case 4:return n.label++,{value:t[1],done:!1};case 5:n.label++,o=t[1],t=[0];continue;case 7:t=n.ops.pop(),n.trys.pop();continue;default:if(!(i=(i=n.trys).length>0&&i[i.length-1])&&(6===t[0]||2===t[0])){n=0;continue}if(3===t[0]&&(!i||t[1]>i[0]&&t[1] tag, please also include @tensorflow/tfjs on the page before using this model.");if(r=e.toFixed(2),t=i.toFixed(2),!(r in n))throw new Error("Invalid version of MobileNet. Valid versions are: "+Object.keys(n));if(!(t in n[r]))throw new Error("MobileNet constructed with invalid alpha "+i+". Valid multipliers for this version are: "+Object.keys(n[r])+".");return[4,(l=new s(r,t)).load()];case 1:return o.sent(),[2,l]}})})},e.MobileNet=s,Object.defineProperty(e,"__esModule",{value:!0})}); \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py deleted file mode 100644 index 2a287a4e97c66acbd36897b25f2ece5494005f03..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -import time -import torch -import sys -import subprocess - -argslist = list(sys.argv)[1:] -log_dir = argslist[-1] -num_gpus = torch.cuda.device_count() -argslist.append('--n_gpus={}'.format(num_gpus)) -workers = [] -job_id = time.strftime("%Y_%m_%d-%H%M%S") -argslist.append("--group_name=group_{}".format(job_id)) - -print("GPU log directory is {}".format(log_dir)) -os.makedirs(log_dir, exist_ok=True) -for i in range(num_gpus): - argslist.append('--rank={}'.format(i)) - stdout = None if i == 0 else open("{}/{}_GPU_{}.log".format(log_dir, job_id, i), - "w") - print(argslist) - p = subprocess.Popen([str(sys.executable)]+argslist, stdout=stdout) - workers.append(p) - argslist = argslist[:-1] - -for p in workers: - p.wait() diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/text_to_speech/tts_transformer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/text_to_speech/tts_transformer.py deleted file mode 100644 index ff7af78bd49708cc5429cd3d481d3866b4612779..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/text_to_speech/tts_transformer.py +++ /dev/null @@ -1,371 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import List, Optional - -import torch -from torch import nn - -from fairseq.models import (FairseqEncoder, FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, register_model, - register_model_architecture) -from fairseq.modules import ( - TransformerEncoderLayer, TransformerDecoderLayer -) -from fairseq.models.text_to_speech.tacotron2 import Prenet, Postnet -from fairseq.modules import LayerNorm, PositionalEmbedding, FairseqDropout -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq import utils - -logger = logging.getLogger(__name__) - - -def encoder_init(m): - if isinstance(m, nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -def Embedding(num_embeddings, embedding_dim): - m = nn.Embedding(num_embeddings, embedding_dim) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - return m - - -class TTSTransformerEncoder(FairseqEncoder): - def __init__(self, args, src_dict, embed_speaker): - super().__init__(src_dict) - self.padding_idx = src_dict.pad() - self.embed_speaker = embed_speaker - self.spk_emb_proj = None - if embed_speaker is not None: - self.spk_emb_proj = nn.Linear( - args.encoder_embed_dim + args.speaker_embed_dim, - args.encoder_embed_dim - ) - - self.dropout_module = FairseqDropout( - p=args.dropout, module_name=self.__class__.__name__ - ) - self.embed_tokens = nn.Embedding(len(src_dict), args.encoder_embed_dim, - padding_idx=self.padding_idx) - assert(args.encoder_conv_kernel_size % 2 == 1) - self.prenet = nn.ModuleList( - nn.Sequential( - nn.Conv1d(args.encoder_embed_dim, args.encoder_embed_dim, - kernel_size=args.encoder_conv_kernel_size, - padding=((args.encoder_conv_kernel_size - 1) // 2)), - nn.BatchNorm1d(args.encoder_embed_dim), - nn.ReLU(), - nn.Dropout(args.encoder_dropout), - ) - for _ in range(args.encoder_conv_layers) - ) - self.prenet_proj = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.embed_positions = PositionalEmbedding( - args.max_source_positions, args.encoder_embed_dim, self.padding_idx - ) - self.pos_emb_alpha = nn.Parameter(torch.ones(1)) - - self.transformer_layers = nn.ModuleList( - TransformerEncoderLayer(args) - for _ in range(args.encoder_transformer_layers) - ) - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(args.encoder_embed_dim) - else: - self.layer_norm = None - - self.apply(encoder_init) - - def forward(self, src_tokens, src_lengths=None, speaker=None, **kwargs): - x = self.embed_tokens(src_tokens) - x = x.transpose(1, 2).contiguous() # B x T x C -> B x C x T - for conv in self.prenet: - x = conv(x) - x = x.transpose(1, 2).contiguous() # B x C x T -> B x T x C - x = self.prenet_proj(x) - - padding_mask = src_tokens.eq(self.padding_idx) - positions = self.embed_positions(padding_mask) - x += self.pos_emb_alpha * positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - for layer in self.transformer_layers: - x = layer(x, padding_mask) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - if self.embed_speaker is not None: - seq_len, bsz, _ = x.size() - emb = self.embed_speaker(speaker).transpose(0, 1) - emb = emb.expand(seq_len, bsz, -1) - x = self.spk_emb_proj(torch.cat([x, emb], dim=2)) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [padding_mask] if padding_mask.any() else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - -def decoder_init(m): - if isinstance(m, torch.nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("tanh")) - - -class TTSTransformerDecoder(FairseqIncrementalDecoder): - def __init__(self, args, src_dict): - super().__init__(None) - self._future_mask = torch.empty(0) - - self.args = args - self.padding_idx = src_dict.pad() - self.n_frames_per_step = args.n_frames_per_step - self.out_dim = args.output_frame_dim * args.n_frames_per_step - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.embed_positions = PositionalEmbedding( - args.max_target_positions, args.decoder_embed_dim, self.padding_idx - ) - self.pos_emb_alpha = nn.Parameter(torch.ones(1)) - self.prenet = nn.Sequential( - Prenet(self.out_dim, args.prenet_layers, args.prenet_dim, - args.prenet_dropout), - nn.Linear(args.prenet_dim, args.decoder_embed_dim), - ) - - self.n_transformer_layers = args.decoder_transformer_layers - self.transformer_layers = nn.ModuleList( - TransformerDecoderLayer(args) - for _ in range(self.n_transformer_layers) - ) - if args.decoder_normalize_before: - self.layer_norm = LayerNorm(args.decoder_embed_dim) - else: - self.layer_norm = None - - self.feat_proj = nn.Linear(args.decoder_embed_dim, self.out_dim) - self.eos_proj = nn.Linear(args.decoder_embed_dim, 1) - - self.postnet = Postnet(self.out_dim, args.postnet_conv_dim, - args.postnet_conv_kernel_size, - args.postnet_layers, args.postnet_dropout) - - self.ctc_proj = None - if getattr(args, "ctc_weight", 0.) > 0.: - self.ctc_proj = nn.Linear(self.out_dim, len(src_dict)) - - self.apply(decoder_init) - - def extract_features( - self, prev_outputs, encoder_out=None, incremental_state=None, - target_lengths=None, speaker=None, **kwargs - ): - alignment_layer = self.n_transformer_layers - 1 - self_attn_padding_mask = lengths_to_padding_mask(target_lengths) - positions = self.embed_positions( - self_attn_padding_mask, incremental_state=incremental_state - ) - - if incremental_state is not None: - prev_outputs = prev_outputs[:, -1:, :] - self_attn_padding_mask = self_attn_padding_mask[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - x = self.prenet(prev_outputs) - x += self.pos_emb_alpha * positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - if not self_attn_padding_mask.any(): - self_attn_padding_mask = None - - attn: Optional[torch.Tensor] = None - inner_states: List[Optional[torch.Tensor]] = [x] - for idx, transformer_layer in enumerate(self.transformer_layers): - if incremental_state is None: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - x, layer_attn, _ = transformer_layer( - x, - encoder_out["encoder_out"][0] - if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0) - else None, - encoder_out["encoder_padding_mask"][0] - if ( - encoder_out is not None - and len(encoder_out["encoder_padding_mask"]) > 0 - ) - else None, - incremental_state, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - need_attn=bool((idx == alignment_layer)), - need_head_weights=bool((idx == alignment_layer)), - ) - inner_states.append(x) - if layer_attn is not None and idx == alignment_layer: - attn = layer_attn.float().to(x) - - if attn is not None: - # average probabilities over heads, transpose to - # (B, src_len, tgt_len) - attn = attn.mean(dim=0).transpose(2, 1) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, {"attn": attn, "inner_states": inner_states} - - def forward(self, prev_output_tokens, encoder_out=None, - incremental_state=None, target_lengths=None, speaker=None, - **kwargs): - x, extra = self.extract_features( - prev_output_tokens, encoder_out=encoder_out, - incremental_state=incremental_state, target_lengths=target_lengths, - speaker=speaker, **kwargs - ) - attn = extra["attn"] - feat_out = self.feat_proj(x) - bsz, seq_len, _ = x.size() - eos_out = self.eos_proj(x) - post_feat_out = feat_out + self.postnet(feat_out) - return post_feat_out, eos_out, {"attn": attn, "feature_out": feat_out} - - def get_normalized_probs(self, net_output, log_probs, sample): - logits = self.ctc_proj(net_output[2]["feature_out"]) - if log_probs: - return utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - # self._future_mask.device != tensor.device is not working in TorchScript. This is a workaround. - if ( - self._future_mask.size(0) == 0 - or (not self._future_mask.device == tensor.device) - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(torch.zeros([dim, dim])), 1 - ) - self._future_mask = self._future_mask.to(tensor) - return self._future_mask[:dim, :dim] - - -@register_model("tts_transformer") -class TTSTransformerModel(FairseqEncoderDecoderModel): - """ - Implementation for https://arxiv.org/pdf/1809.08895.pdf - """ - - @staticmethod - def add_args(parser): - parser.add_argument("--dropout", type=float) - parser.add_argument("--output-frame-dim", type=int) - parser.add_argument("--speaker-embed-dim", type=int) - # encoder prenet - parser.add_argument("--encoder-dropout", type=float) - parser.add_argument("--encoder-conv-layers", type=int) - parser.add_argument("--encoder-conv-kernel-size", type=int) - # encoder transformer layers - parser.add_argument("--encoder-transformer-layers", type=int) - parser.add_argument("--encoder-embed-dim", type=int) - parser.add_argument("--encoder-ffn-embed-dim", type=int) - parser.add_argument("--encoder-normalize-before", action="store_true") - parser.add_argument("--encoder-attention-heads", type=int) - parser.add_argument("--attention-dropout", type=float) - parser.add_argument("--activation-dropout", "--relu-dropout", type=float) - parser.add_argument("--activation-fn", type=str, default="relu") - # decoder prenet - parser.add_argument("--prenet-dropout", type=float) - parser.add_argument("--prenet-layers", type=int) - parser.add_argument("--prenet-dim", type=int) - # decoder postnet - parser.add_argument("--postnet-dropout", type=float) - parser.add_argument("--postnet-layers", type=int) - parser.add_argument("--postnet-conv-dim", type=int) - parser.add_argument("--postnet-conv-kernel-size", type=int) - # decoder transformer layers - parser.add_argument("--decoder-transformer-layers", type=int) - parser.add_argument("--decoder-embed-dim", type=int) - parser.add_argument("--decoder-ffn-embed-dim", type=int) - parser.add_argument("--decoder-normalize-before", action="store_true") - parser.add_argument("--decoder-attention-heads", type=int) - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._num_updates = 0 - - @classmethod - def build_model(cls, args, task): - embed_speaker = task.get_speaker_embeddings(args) - encoder = TTSTransformerEncoder(args, task.src_dict, embed_speaker) - decoder = TTSTransformerDecoder(args, task.src_dict) - return cls(encoder, decoder) - - def forward_encoder(self, src_tokens, src_lengths, speaker=None, **kwargs): - return self.encoder(src_tokens, src_lengths=src_lengths, - speaker=speaker, **kwargs) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self._num_updates = num_updates - - -@register_model_architecture("tts_transformer", "tts_transformer") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.output_frame_dim = getattr(args, "output_frame_dim", 80) - args.speaker_embed_dim = getattr(args, "speaker_embed_dim", 64) - # encoder prenet - args.encoder_dropout = getattr(args, "encoder_dropout", 0.5) - args.encoder_conv_layers = getattr(args, "encoder_conv_layers", 3) - args.encoder_conv_kernel_size = getattr(args, "encoder_conv_kernel_size", 5) - # encoder transformer layers - args.encoder_transformer_layers = getattr(args, "encoder_transformer_layers", 6) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - # decoder prenet - args.prenet_dropout = getattr(args, "prenet_dropout", 0.5) - args.prenet_layers = getattr(args, "prenet_layers", 2) - args.prenet_dim = getattr(args, "prenet_dim", 256) - # decoder postnet - args.postnet_dropout = getattr(args, "postnet_dropout", 0.5) - args.postnet_layers = getattr(args, "postnet_layers", 5) - args.postnet_conv_dim = getattr(args, "postnet_conv_dim", 512) - args.postnet_conv_kernel_size = getattr(args, "postnet_conv_kernel_size", 5) - # decoder transformer layers - args.decoder_transformer_layers = getattr(args, "decoder_transformer_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * args.decoder_embed_dim) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/lightconv_layer/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/lightconv_layer/__init__.py deleted file mode 100644 index 3b2a99c1227f827768911e5e22e79f6865ffbfd3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/lightconv_layer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .lightconv_layer import LightconvLayer # noqa diff --git a/spaces/multimodalart/lora-roulette/README.md b/spaces/multimodalart/lora-roulette/README.md deleted file mode 100644 index ee352ca8e251dabc10c597d7bdda82749ff7fc17..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/lora-roulette/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LoRA Roulette -emoji: 🎲 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 4.0.2 -app_file: app.py -fullWidth: true -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nazianafis/Extract-Tables-From-PDF/README.md b/spaces/nazianafis/Extract-Tables-From-PDF/README.md deleted file mode 100644 index 53f1313c7eb15b6020f2dd1feffe5fb9d701ea10..0000000000000000000000000000000000000000 --- a/spaces/nazianafis/Extract-Tables-From-PDF/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Extract Tables From PDF -emoji: 📚 -colorFrom: gray -colorTo: gray -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. \ No newline at end of file diff --git a/spaces/nbeuchat/actors_matching/actors_matching/__init__.py b/spaces/nbeuchat/actors_matching/actors_matching/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/anchor_generator.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/anchor_generator.py deleted file mode 100644 index ac94e72396ba61778c102133218bb5defe5b4413..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/anchor_generator.py +++ /dev/null @@ -1,386 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import collections -import math -from typing import List -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, move_device_like -from detectron2.structures import Boxes, RotatedBoxes -from detectron2.utils.registry import Registry - -ANCHOR_GENERATOR_REGISTRY = Registry("ANCHOR_GENERATOR") -ANCHOR_GENERATOR_REGISTRY.__doc__ = """ -Registry for modules that creates object detection anchors for feature maps. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -class BufferList(nn.Module): - """ - Similar to nn.ParameterList, but for buffers - """ - - def __init__(self, buffers): - super().__init__() - for i, buffer in enumerate(buffers): - # Use non-persistent buffer so the values are not saved in checkpoint - self.register_buffer(str(i), buffer, persistent=False) - - def __len__(self): - return len(self._buffers) - - def __iter__(self): - return iter(self._buffers.values()) - - -def _create_grid_offsets( - size: List[int], stride: int, offset: float, target_device_tensor: torch.Tensor -): - grid_height, grid_width = size - shifts_x = move_device_like( - torch.arange(offset * stride, grid_width * stride, step=stride, dtype=torch.float32), - target_device_tensor, - ) - shifts_y = move_device_like( - torch.arange(offset * stride, grid_height * stride, step=stride, dtype=torch.float32), - target_device_tensor, - ) - - shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x) - shift_x = shift_x.reshape(-1) - shift_y = shift_y.reshape(-1) - return shift_x, shift_y - - -def _broadcast_params(params, num_features, name): - """ - If one size (or aspect ratio) is specified and there are multiple feature - maps, we "broadcast" anchors of that single size (or aspect ratio) - over all feature maps. - - If params is list[float], or list[list[float]] with len(params) == 1, repeat - it num_features time. - - Returns: - list[list[float]]: param for each feature - """ - assert isinstance( - params, collections.abc.Sequence - ), f"{name} in anchor generator has to be a list! Got {params}." - assert len(params), f"{name} in anchor generator cannot be empty!" - if not isinstance(params[0], collections.abc.Sequence): # params is list[float] - return [params] * num_features - if len(params) == 1: - return list(params) * num_features - assert len(params) == num_features, ( - f"Got {name} of length {len(params)} in anchor generator, " - f"but the number of input features is {num_features}!" - ) - return params - - -@ANCHOR_GENERATOR_REGISTRY.register() -class DefaultAnchorGenerator(nn.Module): - """ - Compute anchors in the standard ways described in - "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". - """ - - box_dim: torch.jit.Final[int] = 4 - """ - the dimension of each anchor box. - """ - - @configurable - def __init__(self, *, sizes, aspect_ratios, strides, offset=0.5): - """ - This interface is experimental. - - Args: - sizes (list[list[float]] or list[float]): - If ``sizes`` is list[list[float]], ``sizes[i]`` is the list of anchor sizes - (i.e. sqrt of anchor area) to use for the i-th feature map. - If ``sizes`` is list[float], ``sizes`` is used for all feature maps. - Anchor sizes are given in absolute lengths in units of - the input image; they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]] or list[float]): list of aspect ratios - (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. - strides (list[int]): stride of each input feature. - offset (float): Relative offset between the center of the first anchor and the top-left - corner of the image. Value has to be in [0, 1). - Recommend to use 0.5, which means half stride. - """ - super().__init__() - - self.strides = strides - self.num_features = len(self.strides) - sizes = _broadcast_params(sizes, self.num_features, "sizes") - aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") - self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios) - - self.offset = offset - assert 0.0 <= self.offset < 1.0, self.offset - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - return { - "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, - "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, - "strides": [x.stride for x in input_shape], - "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, - } - - def _calculate_anchors(self, sizes, aspect_ratios): - cell_anchors = [ - self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios) - ] - return BufferList(cell_anchors) - - @property - @torch.jit.unused - def num_cell_anchors(self): - """ - Alias of `num_anchors`. - """ - return self.num_anchors - - @property - @torch.jit.unused - def num_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel - location, on that feature map. - For example, if at every pixel we use anchors of 3 aspect - ratios and 5 sizes, the number of anchors is 15. - (See also ANCHOR_GENERATOR.SIZES and ANCHOR_GENERATOR.ASPECT_RATIOS in config) - - In standard RPN models, `num_anchors` on every feature map is the same. - """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def _grid_anchors(self, grid_sizes: List[List[int]]): - """ - Returns: - list[Tensor]: #featuremap tensors, each is (#locations x #cell_anchors) x 4 - """ - anchors = [] - # buffers() not supported by torchscript. use named_buffers() instead - buffers: List[torch.Tensor] = [x[1] for x in self.cell_anchors.named_buffers()] - for size, stride, base_anchors in zip(grid_sizes, self.strides, buffers): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors) - shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1) - - anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4)) - - return anchors - - def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)): - """ - Generate a tensor storing canonical anchor boxes, which are all anchor - boxes of different sizes and aspect_ratios centered at (0, 0). - We can later build the set of anchors for a full feature map by - shifting and tiling these tensors (see `meth:_grid_anchors`). - - Args: - sizes (tuple[float]): - aspect_ratios (tuple[float]]): - - Returns: - Tensor of shape (len(sizes) * len(aspect_ratios), 4) storing anchor boxes - in XYXY format. - """ - - # This is different from the anchor generator defined in the original Faster R-CNN - # code or Detectron. They yield the same AP, however the old version defines cell - # anchors in a less natural way with a shift relative to the feature grid and - # quantization that results in slightly different sizes for different aspect ratios. - # See also https://github.com/facebookresearch/Detectron/issues/227 - - anchors = [] - for size in sizes: - area = size**2.0 - for aspect_ratio in aspect_ratios: - # s * s = w * h - # a = h / w - # ... some algebra ... - # w = sqrt(s * s / a) - # h = a * w - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0 - anchors.append([x0, y0, x1, y1]) - return torch.tensor(anchors) - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of backbone feature maps on which to generate anchors. - - Returns: - list[Boxes]: a list of Boxes containing all the anchors for each feature map - (i.e. the cell anchors repeated over all locations in the feature map). - The number of anchors of each feature map is Hi x Wi x num_cell_anchors, - where Hi, Wi are resolution of the feature map divided by anchor stride. - """ - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) - return [Boxes(x) for x in anchors_over_all_feature_maps] - - -@ANCHOR_GENERATOR_REGISTRY.register() -class RotatedAnchorGenerator(nn.Module): - """ - Compute rotated anchors used by Rotated RPN (RRPN), described in - "Arbitrary-Oriented Scene Text Detection via Rotation Proposals". - """ - - box_dim: int = 5 - """ - the dimension of each anchor box. - """ - - @configurable - def __init__(self, *, sizes, aspect_ratios, strides, angles, offset=0.5): - """ - This interface is experimental. - - Args: - sizes (list[list[float]] or list[float]): - If sizes is list[list[float]], sizes[i] is the list of anchor sizes - (i.e. sqrt of anchor area) to use for the i-th feature map. - If sizes is list[float], the sizes are used for all feature maps. - Anchor sizes are given in absolute lengths in units of - the input image; they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]] or list[float]): list of aspect ratios - (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. - strides (list[int]): stride of each input feature. - angles (list[list[float]] or list[float]): list of angles (in degrees CCW) - to use for anchors. Same "broadcast" rule for `sizes` applies. - offset (float): Relative offset between the center of the first anchor and the top-left - corner of the image. Value has to be in [0, 1). - Recommend to use 0.5, which means half stride. - """ - super().__init__() - - self.strides = strides - self.num_features = len(self.strides) - sizes = _broadcast_params(sizes, self.num_features, "sizes") - aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") - angles = _broadcast_params(angles, self.num_features, "angles") - self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios, angles) - - self.offset = offset - assert 0.0 <= self.offset < 1.0, self.offset - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - return { - "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, - "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, - "strides": [x.stride for x in input_shape], - "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, - "angles": cfg.MODEL.ANCHOR_GENERATOR.ANGLES, - } - - def _calculate_anchors(self, sizes, aspect_ratios, angles): - cell_anchors = [ - self.generate_cell_anchors(size, aspect_ratio, angle).float() - for size, aspect_ratio, angle in zip(sizes, aspect_ratios, angles) - ] - return BufferList(cell_anchors) - - @property - def num_cell_anchors(self): - """ - Alias of `num_anchors`. - """ - return self.num_anchors - - @property - def num_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel - location, on that feature map. - For example, if at every pixel we use anchors of 3 aspect - ratios, 2 sizes and 5 angles, the number of anchors is 30. - (See also ANCHOR_GENERATOR.SIZES, ANCHOR_GENERATOR.ASPECT_RATIOS - and ANCHOR_GENERATOR.ANGLES in config) - - In standard RRPN models, `num_anchors` on every feature map is the same. - """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def _grid_anchors(self, grid_sizes): - anchors = [] - for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors) - zeros = torch.zeros_like(shift_x) - shifts = torch.stack((shift_x, shift_y, zeros, zeros, zeros), dim=1) - - anchors.append((shifts.view(-1, 1, 5) + base_anchors.view(1, -1, 5)).reshape(-1, 5)) - - return anchors - - def generate_cell_anchors( - self, - sizes=(32, 64, 128, 256, 512), - aspect_ratios=(0.5, 1, 2), - angles=(-90, -60, -30, 0, 30, 60, 90), - ): - """ - Generate a tensor storing canonical anchor boxes, which are all anchor - boxes of different sizes, aspect_ratios, angles centered at (0, 0). - We can later build the set of anchors for a full feature map by - shifting and tiling these tensors (see `meth:_grid_anchors`). - - Args: - sizes (tuple[float]): - aspect_ratios (tuple[float]]): - angles (tuple[float]]): - - Returns: - Tensor of shape (len(sizes) * len(aspect_ratios) * len(angles), 5) - storing anchor boxes in (x_ctr, y_ctr, w, h, angle) format. - """ - anchors = [] - for size in sizes: - area = size**2.0 - for aspect_ratio in aspect_ratios: - # s * s = w * h - # a = h / w - # ... some algebra ... - # w = sqrt(s * s / a) - # h = a * w - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - anchors.extend([0, 0, w, h, a] for a in angles) - - return torch.tensor(anchors) - - def forward(self, features): - """ - Args: - features (list[Tensor]): list of backbone feature maps on which to generate anchors. - - Returns: - list[RotatedBoxes]: a list of Boxes containing all the anchors for each feature map - (i.e. the cell anchors repeated over all locations in the feature map). - The number of anchors of each feature map is Hi x Wi x num_cell_anchors, - where Hi, Wi are resolution of the feature map divided by anchor stride. - """ - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) - return [RotatedBoxes(x) for x in anchors_over_all_feature_maps] - - -def build_anchor_generator(cfg, input_shape): - """ - Built an anchor generator from `cfg.MODEL.ANCHOR_GENERATOR.NAME`. - """ - anchor_generator = cfg.MODEL.ANCHOR_GENERATOR.NAME - return ANCHOR_GENERATOR_REGISTRY.get(anchor_generator)(cfg, input_shape) diff --git a/spaces/nomic-ai/tatsu-lab_alpaca/style.css b/spaces/nomic-ai/tatsu-lab_alpaca/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/tatsu-lab_alpaca/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/oguzakif/video-object-remover/SiamMask/data/vid/gen_json.py b/spaces/oguzakif/video-object-remover/SiamMask/data/vid/gen_json.py deleted file mode 100644 index 00dd0df4521ca6aa91a7ba93aaac4f094c52164c..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/data/vid/gen_json.py +++ /dev/null @@ -1,85 +0,0 @@ -# -------------------------------------------------------- -# SiamMask -# Licensed under The MIT License -# Written by Qiang Wang (wangqiang2015 at ia.ac.cn) -# -------------------------------------------------------- -from os.path import join -from os import listdir -import json -import numpy as np - -print('load json (raw vid info), please wait 20 seconds~') -vid = json.load(open('vid.json', 'r')) - - -def check_size(frame_sz, bbox): - min_ratio = 0.1 - max_ratio = 0.75 - # only accept objects >10% and <75% of the total frame - area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz))) - ok = (area_ratio > min_ratio) and (area_ratio < max_ratio) - return ok - - -def check_borders(frame_sz, bbox): - dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2 - ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \ - ((frame_sz[0] - bbox[2]) > dist_from_border) and \ - ((frame_sz[1] - bbox[3]) > dist_from_border) - return ok - - -snippets = dict() -n_snippets = 0 -n_videos = 0 -for subset in vid: - for video in subset: - n_videos += 1 - frames = video['frame'] - id_set = [] - id_frames = [[]] * 60 # at most 60 objects - for f, frame in enumerate(frames): - objs = frame['objs'] - frame_sz = frame['frame_sz'] - for obj in objs: - trackid = obj['trackid'] - occluded = obj['occ'] - bbox = obj['bbox'] - # if occluded: - # continue - # - # if not(check_size(frame_sz, bbox) and check_borders(frame_sz, bbox)): - # continue - # - # if obj['c'] in ['n01674464', 'n01726692', 'n04468005', 'n02062744']: - # continue - - if trackid not in id_set: - id_set.append(trackid) - id_frames[trackid] = [] - id_frames[trackid].append(f) - if len(id_set) > 0: - snippets[video['base_path']] = dict() - for selected in id_set: - frame_ids = sorted(id_frames[selected]) - sequences = np.split(frame_ids, np.array(np.where(np.diff(frame_ids) > 1)[0]) + 1) - sequences = [s for s in sequences if len(s) > 1] # remove isolated frame. - for seq in sequences: - snippet = dict() - for frame_id in seq: - frame = frames[frame_id] - for obj in frame['objs']: - if obj['trackid'] == selected: - o = obj - continue - snippet[frame['img_path'].split('.')[0]] = o['bbox'] - snippets[video['base_path']]['{:02d}'.format(selected)] = snippet - n_snippets += 1 - print('video: {:d} snippets_num: {:d}'.format(n_videos, n_snippets)) - -train = {k:v for (k,v) in snippets.items() if 'train' in k} -val = {k:v for (k,v) in snippets.items() if 'val' in k} - -json.dump(train, open('train.json', 'w'), indent=4, sort_keys=True) -json.dump(val, open('val.json', 'w'), indent=4, sort_keys=True) -print('done!') diff --git a/spaces/oronird/sign_translate/app.py b/spaces/oronird/sign_translate/app.py deleted file mode 100644 index cb93e7be4a81597dde301aab52ac14413b3acda1..0000000000000000000000000000000000000000 --- a/spaces/oronird/sign_translate/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import gradio as gr -import numpy as np -import cv2 -import pickle -#from tensorflow import keras - -from stickman import image_pos_hands -from video_processing import frames_for_video, frames_to_video - - -def AWSL(path_video, recognition_type ): - frames = frames_for_video(path_video) - frames2 = [image_pos_hands(img, - output_image_res=(448,448)) for img in frames] - frames_array = np.array(frames2) - - vid_new_path = "test.mp4" - frames_to_video(frames2, vid_new_path) - - # model - model_path = "./inception_stickman_50.pkl" - #model = pickle.load(open(model_path, "rb")) - - return (vid_new_path, vid_new_path) - -iface = gr.Interface(fn=AWSL, - title="Video to Recognition", - description="Recognition the American Sign Language and translate it into English", - inputs=[gr.inputs.Video(label="Upload Video File"), - gr.inputs.Radio(label="Choose translation", choices=['Sign', 'Sentence']) - ], - #examples=[[test_video_path]], - outputs=[gr.Video(label="Generated Video"), - "text"]) -iface.launch() \ No newline at end of file diff --git a/spaces/osanseviero/esmfold_st/README.md b/spaces/osanseviero/esmfold_st/README.md deleted file mode 100644 index 05dd5285d6790dd99e581011e4074ab168072a1f..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/esmfold_st/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Esmfold St -emoji: 📉 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/optimization/mps.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/optimization/mps.md deleted file mode 100644 index cd04d6d1103d5ecd83d7c983a99110928eb85c7e..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/optimization/mps.md +++ /dev/null @@ -1,71 +0,0 @@ - - -# Apple Silicon (M1/M2)에서 Stable Diffusion을 사용하는 방법 - -Diffusers는 Stable Diffusion 추론을 위해 PyTorch `mps`를 사용해 Apple 실리콘과 호환됩니다. 다음은 Stable Diffusion이 있는 M1 또는 M2 컴퓨터를 사용하기 위해 따라야 하는 단계입니다. - -## 요구 사항 - -- Apple silicon (M1/M2) 하드웨어의 Mac 컴퓨터. -- macOS 12.6 또는 이후 (13.0 또는 이후 추천). -- Python arm64 버전 -- PyTorch 2.0(추천) 또는 1.13(`mps`를 지원하는 최소 버전). Yhttps://pytorch.org/get-started/locally/의 지침에 따라 `pip` 또는 `conda`로 설치할 수 있습니다. - - -## 추론 파이프라인 - -아래 코도는 익숙한 `to()` 인터페이스를 사용하여 `mps` 백엔드로 Stable Diffusion 파이프라인을 M1 또는 M2 장치로 이동하는 방법을 보여줍니다. - - - - -**PyTorch 1.13을 사용 중일 때 ** 추가 일회성 전달을 사용하여 파이프라인을 "프라이밍"하는 것을 추천합니다. 이것은 발견한 이상한 문제에 대한 임시 해결 방법입니다. 첫 번째 추론 전달은 후속 전달와 약간 다른 결과를 생성합니다. 이 전달은 한 번만 수행하면 되며 추론 단계를 한 번만 사용하고 결과를 폐기해도 됩니다. - - - -이전 팁에서 설명한 것들을 포함한 여러 문제를 해결하므로 PyTorch 2 이상을 사용하는 것이 좋습니다. - - -```python -# `huggingface-cli login`에 로그인되어 있음을 확인 -from diffusers import DiffusionPipeline - -pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") -pipe = pipe.to("mps") - -# 컴퓨터가 64GB 이하의 RAM 램일 때 추천 -pipe.enable_attention_slicing() - -prompt = "a photo of an astronaut riding a horse on mars" - -# 처음 "워밍업" 전달 (위 설명을 보세요) -_ = pipe(prompt, num_inference_steps=1) - -# 결과는 워밍업 전달 후의 CPU 장치의 결과와 일치합니다. -image = pipe(prompt).images[0] -``` - -## 성능 추천 - -M1/M2 성능은 메모리 압력에 매우 민감합니다. 시스템은 필요한 경우 자동으로 스왑되지만 스왑할 때 성능이 크게 저하됩니다. - - -특히 컴퓨터의 시스템 RAM이 64GB 미만이거나 512 × 512픽셀보다 큰 비표준 해상도에서 이미지를 생성하는 경우, 추론 중에 메모리 압력을 줄이고 스와핑을 방지하기 위해 *어텐션 슬라이싱*을 사용하는 것이 좋습니다. 어텐션 슬라이싱은 비용이 많이 드는 어텐션 작업을 한 번에 모두 수행하는 대신 여러 단계로 수행합니다. 일반적으로 범용 메모리가 없는 컴퓨터에서 ~20%의 성능 영향을 미치지만 64GB 이상이 아닌 경우 대부분의 Apple Silicon 컴퓨터에서 *더 나은 성능*이 관찰되었습니다. - -```python -pipeline.enable_attention_slicing() -``` - -## Known Issues - -- 여러 프롬프트를 배치로 생성하는 것은 [충돌이 발생하거나 안정적으로 작동하지 않습니다](https://github.com/huggingface/diffusers/issues/363). 우리는 이것이 [PyTorch의 `mps` 백엔드](https://github.com/pytorch/pytorch/issues/84039)와 관련이 있다고 생각합니다. 이 문제는 해결되고 있지만 지금은 배치 대신 반복 방법을 사용하는 것이 좋습니다. \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_original_musicldm_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_original_musicldm_to_diffusers.py deleted file mode 100644 index bbc2fc96f89fbab84128368c6e5d85c5f2a5e577..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_original_musicldm_to_diffusers.py +++ /dev/null @@ -1,1064 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Conversion script for the MusicLDM checkpoints.""" - -import argparse -import re - -import torch -from transformers import ( - AutoFeatureExtractor, - AutoTokenizer, - ClapConfig, - ClapModel, - SpeechT5HifiGan, - SpeechT5HifiGanConfig, -) - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - HeunDiscreteScheduler, - LMSDiscreteScheduler, - MusicLDMPipeline, - PNDMScheduler, - UNet2DConditionModel, -) -from diffusers.utils import is_omegaconf_available -from diffusers.utils.import_utils import BACKENDS_MAPPING - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments -def shave_segments(path, n_shave_prefix_segments=1): - """ - Removes segments. Positive values shave the first segments, negative shave the last segments. - """ - if n_shave_prefix_segments >= 0: - return ".".join(path.split(".")[n_shave_prefix_segments:]) - else: - return ".".join(path.split(".")[:n_shave_prefix_segments]) - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_resnet_paths -def renew_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item.replace("in_layers.0", "norm1") - new_item = new_item.replace("in_layers.2", "conv1") - - new_item = new_item.replace("out_layers.0", "norm2") - new_item = new_item.replace("out_layers.3", "conv2") - - new_item = new_item.replace("emb_layers.1", "time_emb_proj") - new_item = new_item.replace("skip_connection", "conv_shortcut") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_resnet_paths -def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("nin_shortcut", "conv_shortcut") - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_attention_paths -def renew_attention_paths(old_list): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - # new_item = new_item.replace('norm.weight', 'group_norm.weight') - # new_item = new_item.replace('norm.bias', 'group_norm.bias') - - # new_item = new_item.replace('proj_out.weight', 'proj_attn.weight') - # new_item = new_item.replace('proj_out.bias', 'proj_attn.bias') - - # new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("norm.weight", "group_norm.weight") - new_item = new_item.replace("norm.bias", "group_norm.bias") - - new_item = new_item.replace("q.weight", "to_q.weight") - new_item = new_item.replace("q.bias", "to_q.bias") - - new_item = new_item.replace("k.weight", "to_k.weight") - new_item = new_item.replace("k.bias", "to_k.bias") - - new_item = new_item.replace("v.weight", "to_v.weight") - new_item = new_item.replace("v.bias", "to_v.bias") - - new_item = new_item.replace("proj_out.weight", "to_out.0.weight") - new_item = new_item.replace("proj_out.bias", "to_out.0.bias") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.assign_to_checkpoint -def assign_to_checkpoint( - paths, checkpoint, old_checkpoint, attention_paths_to_split=None, additional_replacements=None, config=None -): - """ - This does the final conversion step: take locally converted weights and apply a global renaming to them. It splits - attention layers, and takes into account additional replacements that may arise. - - Assigns the weights to the new checkpoint. - """ - assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys." - - # Splits the attention layers into three variables. - if attention_paths_to_split is not None: - for path, path_map in attention_paths_to_split.items(): - old_tensor = old_checkpoint[path] - channels = old_tensor.shape[0] // 3 - - target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) - - num_heads = old_tensor.shape[0] // config["num_head_channels"] // 3 - - old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]) - query, key, value = old_tensor.split(channels // num_heads, dim=1) - - checkpoint[path_map["query"]] = query.reshape(target_shape) - checkpoint[path_map["key"]] = key.reshape(target_shape) - checkpoint[path_map["value"]] = value.reshape(target_shape) - - for path in paths: - new_path = path["new"] - - # These have already been assigned - if attention_paths_to_split is not None and new_path in attention_paths_to_split: - continue - - # Global renaming happens here - new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") - new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") - new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") - - if additional_replacements is not None: - for replacement in additional_replacements: - new_path = new_path.replace(replacement["old"], replacement["new"]) - - # proj_attn.weight has to be converted from conv 1D to linear - if "proj_attn.weight" in new_path: - checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] - else: - checkpoint[new_path] = old_checkpoint[path["old"]] - - -def conv_attn_to_linear(checkpoint): - keys = list(checkpoint.keys()) - attn_keys = ["to_q.weight", "to_k.weight", "to_v.weight"] - proj_key = "to_out.0.weight" - for key in keys: - if ".".join(key.split(".")[-2:]) in attn_keys or ".".join(key.split(".")[-3:]) == proj_key: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key].squeeze() - - -def create_unet_diffusers_config(original_config, image_size: int): - """ - Creates a UNet config for diffusers based on the config of the original MusicLDM model. - """ - unet_params = original_config.model.params.unet_config.params - vae_params = original_config.model.params.first_stage_config.params.ddconfig - - block_out_channels = [unet_params.model_channels * mult for mult in unet_params.channel_mult] - - down_block_types = [] - resolution = 1 - for i in range(len(block_out_channels)): - block_type = "CrossAttnDownBlock2D" if resolution in unet_params.attention_resolutions else "DownBlock2D" - down_block_types.append(block_type) - if i != len(block_out_channels) - 1: - resolution *= 2 - - up_block_types = [] - for i in range(len(block_out_channels)): - block_type = "CrossAttnUpBlock2D" if resolution in unet_params.attention_resolutions else "UpBlock2D" - up_block_types.append(block_type) - resolution //= 2 - - vae_scale_factor = 2 ** (len(vae_params.ch_mult) - 1) - - cross_attention_dim = ( - unet_params.cross_attention_dim if "cross_attention_dim" in unet_params else block_out_channels - ) - - class_embed_type = "simple_projection" if "extra_film_condition_dim" in unet_params else None - projection_class_embeddings_input_dim = ( - unet_params.extra_film_condition_dim if "extra_film_condition_dim" in unet_params else None - ) - class_embeddings_concat = unet_params.extra_film_use_concat if "extra_film_use_concat" in unet_params else None - - config = { - "sample_size": image_size // vae_scale_factor, - "in_channels": unet_params.in_channels, - "out_channels": unet_params.out_channels, - "down_block_types": tuple(down_block_types), - "up_block_types": tuple(up_block_types), - "block_out_channels": tuple(block_out_channels), - "layers_per_block": unet_params.num_res_blocks, - "cross_attention_dim": cross_attention_dim, - "class_embed_type": class_embed_type, - "projection_class_embeddings_input_dim": projection_class_embeddings_input_dim, - "class_embeddings_concat": class_embeddings_concat, - } - - return config - - -# Adapted from diffusers.pipelines.stable_diffusion.convert_from_ckpt.create_vae_diffusers_config -def create_vae_diffusers_config(original_config, checkpoint, image_size: int): - """ - Creates a VAE config for diffusers based on the config of the original MusicLDM model. Compared to the original - Stable Diffusion conversion, this function passes a *learnt* VAE scaling factor to the diffusers VAE. - """ - vae_params = original_config.model.params.first_stage_config.params.ddconfig - _ = original_config.model.params.first_stage_config.params.embed_dim - - block_out_channels = [vae_params.ch * mult for mult in vae_params.ch_mult] - down_block_types = ["DownEncoderBlock2D"] * len(block_out_channels) - up_block_types = ["UpDecoderBlock2D"] * len(block_out_channels) - - scaling_factor = checkpoint["scale_factor"] if "scale_by_std" in original_config.model.params else 0.18215 - - config = { - "sample_size": image_size, - "in_channels": vae_params.in_channels, - "out_channels": vae_params.out_ch, - "down_block_types": tuple(down_block_types), - "up_block_types": tuple(up_block_types), - "block_out_channels": tuple(block_out_channels), - "latent_channels": vae_params.z_channels, - "layers_per_block": vae_params.num_res_blocks, - "scaling_factor": float(scaling_factor), - } - return config - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.create_diffusers_schedular -def create_diffusers_schedular(original_config): - schedular = DDIMScheduler( - num_train_timesteps=original_config.model.params.timesteps, - beta_start=original_config.model.params.linear_start, - beta_end=original_config.model.params.linear_end, - beta_schedule="scaled_linear", - ) - return schedular - - -def convert_ldm_unet_checkpoint(checkpoint, config, path=None, extract_ema=False): - """ - Takes a state dict and a config, and returns a converted checkpoint. Compared to the original Stable Diffusion - conversion, this function additionally converts the learnt film embedding linear layer. - """ - - # extract state_dict for UNet - unet_state_dict = {} - keys = list(checkpoint.keys()) - - unet_key = "model.diffusion_model." - # at least a 100 parameters have to start with `model_ema` in order for the checkpoint to be EMA - if sum(k.startswith("model_ema") for k in keys) > 100 and extract_ema: - print(f"Checkpoint {path} has both EMA and non-EMA weights.") - print( - "In this conversion only the EMA weights are extracted. If you want to instead extract the non-EMA" - " weights (useful to continue fine-tuning), please make sure to remove the `--extract_ema` flag." - ) - for key in keys: - if key.startswith("model.diffusion_model"): - flat_ema_key = "model_ema." + "".join(key.split(".")[1:]) - unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop(flat_ema_key) - else: - if sum(k.startswith("model_ema") for k in keys) > 100: - print( - "In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA" - " weights (usually better for inference), please make sure to add the `--extract_ema` flag." - ) - - for key in keys: - if key.startswith(unet_key): - unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop(key) - - new_checkpoint = {} - - new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"] - new_checkpoint["time_embedding.linear_1.bias"] = unet_state_dict["time_embed.0.bias"] - new_checkpoint["time_embedding.linear_2.weight"] = unet_state_dict["time_embed.2.weight"] - new_checkpoint["time_embedding.linear_2.bias"] = unet_state_dict["time_embed.2.bias"] - - new_checkpoint["class_embedding.weight"] = unet_state_dict["film_emb.weight"] - new_checkpoint["class_embedding.bias"] = unet_state_dict["film_emb.bias"] - - new_checkpoint["conv_in.weight"] = unet_state_dict["input_blocks.0.0.weight"] - new_checkpoint["conv_in.bias"] = unet_state_dict["input_blocks.0.0.bias"] - - new_checkpoint["conv_norm_out.weight"] = unet_state_dict["out.0.weight"] - new_checkpoint["conv_norm_out.bias"] = unet_state_dict["out.0.bias"] - new_checkpoint["conv_out.weight"] = unet_state_dict["out.2.weight"] - new_checkpoint["conv_out.bias"] = unet_state_dict["out.2.bias"] - - # Retrieves the keys for the input blocks only - num_input_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "input_blocks" in layer}) - input_blocks = { - layer_id: [key for key in unet_state_dict if f"input_blocks.{layer_id}" in key] - for layer_id in range(num_input_blocks) - } - - # Retrieves the keys for the middle blocks only - num_middle_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "middle_block" in layer}) - middle_blocks = { - layer_id: [key for key in unet_state_dict if f"middle_block.{layer_id}" in key] - for layer_id in range(num_middle_blocks) - } - - # Retrieves the keys for the output blocks only - num_output_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "output_blocks" in layer}) - output_blocks = { - layer_id: [key for key in unet_state_dict if f"output_blocks.{layer_id}" in key] - for layer_id in range(num_output_blocks) - } - - for i in range(1, num_input_blocks): - block_id = (i - 1) // (config["layers_per_block"] + 1) - layer_in_block_id = (i - 1) % (config["layers_per_block"] + 1) - - resnets = [ - key for key in input_blocks[i] if f"input_blocks.{i}.0" in key and f"input_blocks.{i}.0.op" not in key - ] - attentions = [key for key in input_blocks[i] if f"input_blocks.{i}.1" in key] - - if f"input_blocks.{i}.0.op.weight" in unet_state_dict: - new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.weight"] = unet_state_dict.pop( - f"input_blocks.{i}.0.op.weight" - ) - new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.bias"] = unet_state_dict.pop( - f"input_blocks.{i}.0.op.bias" - ) - - paths = renew_resnet_paths(resnets) - meta_path = {"old": f"input_blocks.{i}.0", "new": f"down_blocks.{block_id}.resnets.{layer_in_block_id}"} - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = {"old": f"input_blocks.{i}.1", "new": f"down_blocks.{block_id}.attentions.{layer_in_block_id}"} - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - resnet_0 = middle_blocks[0] - attentions = middle_blocks[1] - resnet_1 = middle_blocks[2] - - resnet_0_paths = renew_resnet_paths(resnet_0) - assign_to_checkpoint(resnet_0_paths, new_checkpoint, unet_state_dict, config=config) - - resnet_1_paths = renew_resnet_paths(resnet_1) - assign_to_checkpoint(resnet_1_paths, new_checkpoint, unet_state_dict, config=config) - - attentions_paths = renew_attention_paths(attentions) - meta_path = {"old": "middle_block.1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - attentions_paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - for i in range(num_output_blocks): - block_id = i // (config["layers_per_block"] + 1) - layer_in_block_id = i % (config["layers_per_block"] + 1) - output_block_layers = [shave_segments(name, 2) for name in output_blocks[i]] - output_block_list = {} - - for layer in output_block_layers: - layer_id, layer_name = layer.split(".")[0], shave_segments(layer, 1) - if layer_id in output_block_list: - output_block_list[layer_id].append(layer_name) - else: - output_block_list[layer_id] = [layer_name] - - if len(output_block_list) > 1: - resnets = [key for key in output_blocks[i] if f"output_blocks.{i}.0" in key] - attentions = [key for key in output_blocks[i] if f"output_blocks.{i}.1" in key] - - resnet_0_paths = renew_resnet_paths(resnets) - paths = renew_resnet_paths(resnets) - - meta_path = {"old": f"output_blocks.{i}.0", "new": f"up_blocks.{block_id}.resnets.{layer_in_block_id}"} - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - output_block_list = {k: sorted(v) for k, v in output_block_list.items()} - if ["conv.bias", "conv.weight"] in output_block_list.values(): - index = list(output_block_list.values()).index(["conv.bias", "conv.weight"]) - new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.weight"] = unet_state_dict[ - f"output_blocks.{i}.{index}.conv.weight" - ] - new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.bias"] = unet_state_dict[ - f"output_blocks.{i}.{index}.conv.bias" - ] - - # Clear attentions as they have been attributed above. - if len(attentions) == 2: - attentions = [] - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = { - "old": f"output_blocks.{i}.1", - "new": f"up_blocks.{block_id}.attentions.{layer_in_block_id}", - } - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - else: - resnet_0_paths = renew_resnet_paths(output_block_layers, n_shave_prefix_segments=1) - for path in resnet_0_paths: - old_path = ".".join(["output_blocks", str(i), path["old"]]) - new_path = ".".join(["up_blocks", str(block_id), "resnets", str(layer_in_block_id), path["new"]]) - - new_checkpoint[new_path] = unet_state_dict[old_path] - - return new_checkpoint - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.convert_ldm_vae_checkpoint -def convert_ldm_vae_checkpoint(checkpoint, config): - # extract state dict for VAE - vae_state_dict = {} - vae_key = "first_stage_model." - keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(vae_key): - vae_state_dict[key.replace(vae_key, "")] = checkpoint.get(key) - - new_checkpoint = {} - - new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"] - new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"] - new_checkpoint["encoder.conv_out.weight"] = vae_state_dict["encoder.conv_out.weight"] - new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"] - new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict["encoder.norm_out.weight"] - new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict["encoder.norm_out.bias"] - - new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"] - new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"] - new_checkpoint["decoder.conv_out.weight"] = vae_state_dict["decoder.conv_out.weight"] - new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"] - new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict["decoder.norm_out.weight"] - new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict["decoder.norm_out.bias"] - - new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"] - new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"] - new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"] - new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"] - - # Retrieves the keys for the encoder down blocks only - num_down_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "encoder.down" in layer}) - down_blocks = { - layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] for layer_id in range(num_down_blocks) - } - - # Retrieves the keys for the decoder up blocks only - num_up_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "decoder.up" in layer}) - up_blocks = { - layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] for layer_id in range(num_up_blocks) - } - - for i in range(num_down_blocks): - resnets = [key for key in down_blocks[i] if f"down.{i}" in key and f"down.{i}.downsample" not in key] - - if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict: - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.weight"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.weight" - ) - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.bias"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.bias" - ) - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - conv_attn_to_linear(new_checkpoint) - - for i in range(num_up_blocks): - block_id = num_up_blocks - 1 - i - resnets = [ - key for key in up_blocks[block_id] if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key - ] - - if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict: - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.weight"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.weight" - ] - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.bias"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.bias" - ] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - conv_attn_to_linear(new_checkpoint) - return new_checkpoint - - -CLAP_KEYS_TO_MODIFY_MAPPING = { - "text_branch": "text_model", - "audio_branch": "audio_model.audio_encoder", - "attn": "attention.self", - "self.proj": "output.dense", - "attention.self_mask": "attn_mask", - "mlp.fc1": "intermediate.dense", - "mlp.fc2": "output.dense", - "norm1": "layernorm_before", - "norm2": "layernorm_after", - "bn0": "batch_norm", -} - -CLAP_KEYS_TO_IGNORE = [ - "text_transform", - "audio_transform", - "stft", - "logmel_extractor", - "tscam_conv", - "head", - "attn_mask", -] - -CLAP_EXPECTED_MISSING_KEYS = ["text_model.embeddings.token_type_ids"] - - -def convert_open_clap_checkpoint(checkpoint): - """ - Takes a state dict and returns a converted CLAP checkpoint. - """ - # extract state dict for CLAP text embedding model, discarding the audio component - model_state_dict = {} - model_key = "cond_stage_model.model." - keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(model_key): - model_state_dict[key.replace(model_key, "")] = checkpoint.get(key) - - new_checkpoint = {} - - sequential_layers_pattern = r".*sequential.(\d+).*" - text_projection_pattern = r".*_projection.(\d+).*" - - for key, value in model_state_dict.items(): - # check if key should be ignored in mapping - if so map it to a key name that we'll filter out at the end - for key_to_ignore in CLAP_KEYS_TO_IGNORE: - if key_to_ignore in key: - key = "spectrogram" - - # check if any key needs to be modified - for key_to_modify, new_key in CLAP_KEYS_TO_MODIFY_MAPPING.items(): - if key_to_modify in key: - key = key.replace(key_to_modify, new_key) - - if re.match(sequential_layers_pattern, key): - # replace sequential layers with list - sequential_layer = re.match(sequential_layers_pattern, key).group(1) - - key = key.replace(f"sequential.{sequential_layer}.", f"layers.{int(sequential_layer)//3}.linear.") - elif re.match(text_projection_pattern, key): - projecton_layer = int(re.match(text_projection_pattern, key).group(1)) - - # Because in CLAP they use `nn.Sequential`... - transformers_projection_layer = 1 if projecton_layer == 0 else 2 - - key = key.replace(f"_projection.{projecton_layer}.", f"_projection.linear{transformers_projection_layer}.") - - if "audio" and "qkv" in key: - # split qkv into query key and value - mixed_qkv = value - qkv_dim = mixed_qkv.size(0) // 3 - - query_layer = mixed_qkv[:qkv_dim] - key_layer = mixed_qkv[qkv_dim : qkv_dim * 2] - value_layer = mixed_qkv[qkv_dim * 2 :] - - new_checkpoint[key.replace("qkv", "query")] = query_layer - new_checkpoint[key.replace("qkv", "key")] = key_layer - new_checkpoint[key.replace("qkv", "value")] = value_layer - elif key != "spectrogram": - new_checkpoint[key] = value - - return new_checkpoint - - -def create_transformers_vocoder_config(original_config): - """ - Creates a config for transformers SpeechT5HifiGan based on the config of the vocoder model. - """ - vocoder_params = original_config.model.params.vocoder_config.params - - config = { - "model_in_dim": vocoder_params.num_mels, - "sampling_rate": vocoder_params.sampling_rate, - "upsample_initial_channel": vocoder_params.upsample_initial_channel, - "upsample_rates": list(vocoder_params.upsample_rates), - "upsample_kernel_sizes": list(vocoder_params.upsample_kernel_sizes), - "resblock_kernel_sizes": list(vocoder_params.resblock_kernel_sizes), - "resblock_dilation_sizes": [ - list(resblock_dilation) for resblock_dilation in vocoder_params.resblock_dilation_sizes - ], - "normalize_before": False, - } - - return config - - -def convert_hifigan_checkpoint(checkpoint, config): - """ - Takes a state dict and config, and returns a converted HiFiGAN vocoder checkpoint. - """ - # extract state dict for vocoder - vocoder_state_dict = {} - vocoder_key = "first_stage_model.vocoder." - keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(vocoder_key): - vocoder_state_dict[key.replace(vocoder_key, "")] = checkpoint.get(key) - - # fix upsampler keys, everything else is correct already - for i in range(len(config.upsample_rates)): - vocoder_state_dict[f"upsampler.{i}.weight"] = vocoder_state_dict.pop(f"ups.{i}.weight") - vocoder_state_dict[f"upsampler.{i}.bias"] = vocoder_state_dict.pop(f"ups.{i}.bias") - - if not config.normalize_before: - # if we don't set normalize_before then these variables are unused, so we set them to their initialised values - vocoder_state_dict["mean"] = torch.zeros(config.model_in_dim) - vocoder_state_dict["scale"] = torch.ones(config.model_in_dim) - - return vocoder_state_dict - - -# Adapted from https://huggingface.co/spaces/haoheliu/MusicLDM-text-to-audio-generation/blob/84a0384742a22bd80c44e903e241f0623e874f1d/MusicLDM/utils.py#L72-L73 -DEFAULT_CONFIG = { - "model": { - "params": { - "linear_start": 0.0015, - "linear_end": 0.0195, - "timesteps": 1000, - "channels": 8, - "scale_by_std": True, - "unet_config": { - "target": "MusicLDM.latent_diffusion.openaimodel.UNetModel", - "params": { - "extra_film_condition_dim": 512, - "extra_film_use_concat": True, - "in_channels": 8, - "out_channels": 8, - "model_channels": 128, - "attention_resolutions": [8, 4, 2], - "num_res_blocks": 2, - "channel_mult": [1, 2, 3, 5], - "num_head_channels": 32, - }, - }, - "first_stage_config": { - "target": "MusicLDM.variational_autoencoder.autoencoder.AutoencoderKL", - "params": { - "embed_dim": 8, - "ddconfig": { - "z_channels": 8, - "resolution": 256, - "in_channels": 1, - "out_ch": 1, - "ch": 128, - "ch_mult": [1, 2, 4], - "num_res_blocks": 2, - }, - }, - }, - "vocoder_config": { - "target": "MusicLDM.first_stage_model.vocoder", - "params": { - "upsample_rates": [5, 4, 2, 2, 2], - "upsample_kernel_sizes": [16, 16, 8, 4, 4], - "upsample_initial_channel": 1024, - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "num_mels": 64, - "sampling_rate": 16000, - }, - }, - }, - }, -} - - -def load_pipeline_from_original_MusicLDM_ckpt( - checkpoint_path: str, - original_config_file: str = None, - image_size: int = 1024, - prediction_type: str = None, - extract_ema: bool = False, - scheduler_type: str = "ddim", - num_in_channels: int = None, - model_channels: int = None, - num_head_channels: int = None, - device: str = None, - from_safetensors: bool = False, -) -> MusicLDMPipeline: - """ - Load an MusicLDM pipeline object from a `.ckpt`/`.safetensors` file and (ideally) a `.yaml` config file. - - Although many of the arguments can be automatically inferred, some of these rely on brittle checks against the - global step count, which will likely fail for models that have undergone further fine-tuning. Therefore, it is - recommended that you override the default values and/or supply an `original_config_file` wherever possible. - - Args: - checkpoint_path (`str`): Path to `.ckpt` file. - original_config_file (`str`): - Path to `.yaml` config file corresponding to the original architecture. If `None`, will be automatically - set to the MusicLDM-s-full-v2 config. - image_size (`int`, *optional*, defaults to 1024): - The image size that the model was trained on. - prediction_type (`str`, *optional*): - The prediction type that the model was trained on. If `None`, will be automatically - inferred by looking for a key in the config. For the default config, the prediction type is `'epsilon'`. - num_in_channels (`int`, *optional*, defaults to None): - The number of UNet input channels. If `None`, it will be automatically inferred from the config. - model_channels (`int`, *optional*, defaults to None): - The number of UNet model channels. If `None`, it will be automatically inferred from the config. Override - to 128 for the small checkpoints, 192 for the medium checkpoints and 256 for the large. - num_head_channels (`int`, *optional*, defaults to None): - The number of UNet head channels. If `None`, it will be automatically inferred from the config. Override - to 32 for the small and medium checkpoints, and 64 for the large. - scheduler_type (`str`, *optional*, defaults to 'pndm'): - Type of scheduler to use. Should be one of `["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", - "ddim"]`. - extract_ema (`bool`, *optional*, defaults to `False`): Only relevant for - checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights or not. Defaults to - `False`. Pass `True` to extract the EMA weights. EMA weights usually yield higher quality images for - inference. Non-EMA weights are usually better to continue fine-tuning. - device (`str`, *optional*, defaults to `None`): - The device to use. Pass `None` to determine automatically. - from_safetensors (`str`, *optional*, defaults to `False`): - If `checkpoint_path` is in `safetensors` format, load checkpoint with safetensors instead of PyTorch. - return: An MusicLDMPipeline object representing the passed-in `.ckpt`/`.safetensors` file. - """ - - if not is_omegaconf_available(): - raise ValueError(BACKENDS_MAPPING["omegaconf"][1]) - - from omegaconf import OmegaConf - - if from_safetensors: - from safetensors import safe_open - - checkpoint = {} - with safe_open(checkpoint_path, framework="pt", device="cpu") as f: - for key in f.keys(): - checkpoint[key] = f.get_tensor(key) - else: - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - checkpoint = torch.load(checkpoint_path, map_location=device) - else: - checkpoint = torch.load(checkpoint_path, map_location=device) - - if "state_dict" in checkpoint: - checkpoint = checkpoint["state_dict"] - - if original_config_file is None: - original_config = DEFAULT_CONFIG - original_config = OmegaConf.create(original_config) - else: - original_config = OmegaConf.load(original_config_file) - - if num_in_channels is not None: - original_config["model"]["params"]["unet_config"]["params"]["in_channels"] = num_in_channels - - if model_channels is not None: - original_config["model"]["params"]["unet_config"]["params"]["model_channels"] = model_channels - - if num_head_channels is not None: - original_config["model"]["params"]["unet_config"]["params"]["num_head_channels"] = num_head_channels - - if ( - "parameterization" in original_config["model"]["params"] - and original_config["model"]["params"]["parameterization"] == "v" - ): - if prediction_type is None: - prediction_type = "v_prediction" - else: - if prediction_type is None: - prediction_type = "epsilon" - - if image_size is None: - image_size = 512 - - num_train_timesteps = original_config.model.params.timesteps - beta_start = original_config.model.params.linear_start - beta_end = original_config.model.params.linear_end - - scheduler = DDIMScheduler( - beta_end=beta_end, - beta_schedule="scaled_linear", - beta_start=beta_start, - num_train_timesteps=num_train_timesteps, - steps_offset=1, - clip_sample=False, - set_alpha_to_one=False, - prediction_type=prediction_type, - ) - # make sure scheduler works correctly with DDIM - scheduler.register_to_config(clip_sample=False) - - if scheduler_type == "pndm": - config = dict(scheduler.config) - config["skip_prk_steps"] = True - scheduler = PNDMScheduler.from_config(config) - elif scheduler_type == "lms": - scheduler = LMSDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "heun": - scheduler = HeunDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "euler": - scheduler = EulerDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "euler-ancestral": - scheduler = EulerAncestralDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "dpm": - scheduler = DPMSolverMultistepScheduler.from_config(scheduler.config) - elif scheduler_type == "ddim": - scheduler = scheduler - else: - raise ValueError(f"Scheduler of type {scheduler_type} doesn't exist!") - - # Convert the UNet2DModel - unet_config = create_unet_diffusers_config(original_config, image_size=image_size) - unet = UNet2DConditionModel(**unet_config) - - converted_unet_checkpoint = convert_ldm_unet_checkpoint( - checkpoint, unet_config, path=checkpoint_path, extract_ema=extract_ema - ) - - unet.load_state_dict(converted_unet_checkpoint) - - # Convert the VAE model - vae_config = create_vae_diffusers_config(original_config, checkpoint=checkpoint, image_size=image_size) - converted_vae_checkpoint = convert_ldm_vae_checkpoint(checkpoint, vae_config) - - vae = AutoencoderKL(**vae_config) - vae.load_state_dict(converted_vae_checkpoint) - - # Convert the text model - # MusicLDM uses the same tokenizer as the original CLAP model, but a slightly different configuration - config = ClapConfig.from_pretrained("laion/clap-htsat-unfused") - config.audio_config.update( - { - "patch_embeds_hidden_size": 128, - "hidden_size": 1024, - "depths": [2, 2, 12, 2], - } - ) - tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused") - feature_extractor = AutoFeatureExtractor.from_pretrained("laion/clap-htsat-unfused") - - converted_text_model = convert_open_clap_checkpoint(checkpoint) - text_model = ClapModel(config) - - missing_keys, unexpected_keys = text_model.load_state_dict(converted_text_model, strict=False) - # we expect not to have token_type_ids in our original state dict so let's ignore them - missing_keys = list(set(missing_keys) - set(CLAP_EXPECTED_MISSING_KEYS)) - - if len(unexpected_keys) > 0: - raise ValueError(f"Unexpected keys when loading CLAP model: {unexpected_keys}") - - if len(missing_keys) > 0: - raise ValueError(f"Missing keys when loading CLAP model: {missing_keys}") - - # Convert the vocoder model - vocoder_config = create_transformers_vocoder_config(original_config) - vocoder_config = SpeechT5HifiGanConfig(**vocoder_config) - converted_vocoder_checkpoint = convert_hifigan_checkpoint(checkpoint, vocoder_config) - - vocoder = SpeechT5HifiGan(vocoder_config) - vocoder.load_state_dict(converted_vocoder_checkpoint) - - # Instantiate the diffusers pipeline - pipe = MusicLDMPipeline( - vae=vae, - text_encoder=text_model, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - vocoder=vocoder, - feature_extractor=feature_extractor, - ) - - return pipe - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." - ) - parser.add_argument( - "--original_config_file", - default=None, - type=str, - help="The YAML config file corresponding to the original architecture.", - ) - parser.add_argument( - "--num_in_channels", - default=None, - type=int, - help="The number of input channels. If `None` number of input channels will be automatically inferred.", - ) - parser.add_argument( - "--model_channels", - default=None, - type=int, - help="The number of UNet model channels. If `None`, it will be automatically inferred from the config. Override" - " to 128 for the small checkpoints, 192 for the medium checkpoints and 256 for the large.", - ) - parser.add_argument( - "--num_head_channels", - default=None, - type=int, - help="The number of UNet head channels. If `None`, it will be automatically inferred from the config. Override" - " to 32 for the small and medium checkpoints, and 64 for the large.", - ) - parser.add_argument( - "--scheduler_type", - default="ddim", - type=str, - help="Type of scheduler to use. Should be one of ['pndm', 'lms', 'ddim', 'euler', 'euler-ancestral', 'dpm']", - ) - parser.add_argument( - "--image_size", - default=None, - type=int, - help=("The image size that the model was trained on."), - ) - parser.add_argument( - "--prediction_type", - default=None, - type=str, - help=("The prediction type that the model was trained on."), - ) - parser.add_argument( - "--extract_ema", - action="store_true", - help=( - "Only relevant for checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights" - " or not. Defaults to `False`. Add `--extract_ema` to extract the EMA weights. EMA weights usually yield" - " higher quality images for inference. Non-EMA weights are usually better to continue fine-tuning." - ), - ) - parser.add_argument( - "--from_safetensors", - action="store_true", - help="If `--checkpoint_path` is in `safetensors` format, load checkpoint with safetensors instead of PyTorch.", - ) - parser.add_argument( - "--to_safetensors", - action="store_true", - help="Whether to store pipeline in safetensors format or not.", - ) - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - parser.add_argument("--device", type=str, help="Device to use (e.g. cpu, cuda:0, cuda:1, etc.)") - args = parser.parse_args() - - pipe = load_pipeline_from_original_MusicLDM_ckpt( - checkpoint_path=args.checkpoint_path, - original_config_file=args.original_config_file, - image_size=args.image_size, - prediction_type=args.prediction_type, - extract_ema=args.extract_ema, - scheduler_type=args.scheduler_type, - num_in_channels=args.num_in_channels, - model_channels=args.model_channels, - num_head_channels=args.num_head_channels, - from_safetensors=args.from_safetensors, - device=args.device, - ) - pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors) diff --git a/spaces/perilli/tortoise-tts-v2/setup.py b/spaces/perilli/tortoise-tts-v2/setup.py deleted file mode 100644 index 019e48d35f1e8747d7ea489f2d4911790255c225..0000000000000000000000000000000000000000 --- a/spaces/perilli/tortoise-tts-v2/setup.py +++ /dev/null @@ -1,35 +0,0 @@ -import setuptools - -with open("README.md", "r", encoding="utf-8") as fh: - long_description = fh.read() - -setuptools.setup( - name="TorToiSe", - packages=setuptools.find_packages(), - version="2.1.3", - author="James Betker", - author_email="james@adamant.ai", - description="A high quality multi-voice text-to-speech library", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/neonbjb/tortoise-tts", - project_urls={}, - install_requires=[ - 'tqdm', - 'rotary_embedding_torch', - 'inflect', - 'progressbar', - 'einops', - 'unidecode', - 'scipy', - 'librosa', - 'transformers', - 'tokenizers', - ], - classifiers=[ - "Programming Language :: Python :: 3", - "License :: OSI Approved :: Apache Software License", - "Operating System :: OS Independent", - ], - python_requires=">=3.6", -) \ No newline at end of file diff --git a/spaces/pietrocagnasso/paper-highlights-extraction/Highlighter.py b/spaces/pietrocagnasso/paper-highlights-extraction/Highlighter.py deleted file mode 100644 index 676aee08126083b8515d8c3a4a8943c4a9ad718a..0000000000000000000000000000000000000000 --- a/spaces/pietrocagnasso/paper-highlights-extraction/Highlighter.py +++ /dev/null @@ -1,74 +0,0 @@ -import torch -import numpy as np - -from torch.utils.data import Dataset, DataLoader -from transformers import AutoModelForSequenceClassification, AutoTokenizer - - -class InternalDataset(Dataset): - def __init__(self, sentences, context, tokenizer): - self.x = sentences - self.context = context - self.tokenizer = tokenizer - - def __getitem__(self, index): - x = self.tokenizer.encode_plus(self.x[index], - self.context, - padding="max_length", - truncation=True, - max_length=384, - return_attention_mask=True, - return_tensors='pt') - - return {"input_ids": x["input_ids"][0], - "attention_mask": x["attention_mask"][0]} - - def __len__(self): - return len(self.x) - - -class Highlighter(): - def __init__(self, model_name="pietrocagnasso/thext-pce-cs"): - """ - Parameters: - model_name: string with the name of the model to be used, by default it is the model - trained on all the datasets - """ - - self.tokenizer = AutoTokenizer.from_pretrained(model_name) - self.device = "cuda" if torch.cuda.is_available() else "cpu" - self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(self.device) - - def get_highlights(self, - sentences, - context, - n_hl=3 - ): - """ - This methods is used to extract some highlights from a collection of sentences and given a - context. - - Parameters: - sentence: list of sentences to be evaluated - context: string containing the context - n_hl: number of highlights to extract - """ - if n_hl > len(sentences): - n_hl = len(sentences) - - ds = InternalDataset(sentences=sentences, - context=context, - tokenizer=self.tokenizer) - dl = DataLoader(ds, batch_size=4) - - res = [] - for data in dl: - ids, am = data["input_ids"], data["attention_mask"] - - out = self.model(input_ids=ids.to(self.device), attention_mask=am.to(self.device)) - - for t in out[0]: - res.append(t.item()) - - args = np.argsort(res)[::-1][:n_hl] - return [sentences[i] for i in args] \ No newline at end of file diff --git a/spaces/pkiage/fast_arbitrary_image_style_transfer/src/image_utils.py b/spaces/pkiage/fast_arbitrary_image_style_transfer/src/image_utils.py deleted file mode 100644 index aefea9e940dabe9925981e38e8492314c18c7953..0000000000000000000000000000000000000000 --- a/spaces/pkiage/fast_arbitrary_image_style_transfer/src/image_utils.py +++ /dev/null @@ -1,53 +0,0 @@ -import tensorflow as tf -import numpy as np -from PIL import Image - - -def load_img(path_to_img): - max_dim = 512 - img = tf.io.read_file(path_to_img) - img = tf.image.decode_image(img) - img = tf.image.convert_image_dtype(img, tf.float32) - - shape = tf.cast(tf.shape(img)[:-1], tf.float32) - long_dim = max(shape) - scale = max_dim / long_dim - - new_shape = tf.cast(shape * scale, tf.int32) - - img = tf.image.resize(img, new_shape) - img = img[tf.newaxis, :] - return img - - -def transform_img(img): - max_dim = 512 - img = tf.image.decode_image(img) - img = tf.image.convert_image_dtype(img, tf.float32) - - shape = tf.cast(tf.shape(img)[:-1], tf.float32) - long_dim = max(shape) - scale = max_dim / long_dim - - new_shape = tf.cast(shape * scale, tf.int32) - - img = tf.image.resize(img, new_shape) - img = img[tf.newaxis, :] - return img - - -def imshow(image, title=None): - if len(image.shape) > 3: - image = tf.squeeze(image, axis=0) - - image = np.squeeze(image) - return image - - -def tensor_to_image(tensor): - tensor = tensor * 255 - tensor = np.array(tensor, np.uint8) - if np.ndim(tensor) > 3: - assert tensor.shape[0] == 1 - tensor = tensor[0] - return Image.fromarray(tensor) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/legacy/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/legacy/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/parser/isoparser.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/parser/isoparser.py deleted file mode 100644 index 5d7bee38006d4e510b841d84df0322dee024b77c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/parser/isoparser.py +++ /dev/null @@ -1,416 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers a parser for ISO-8601 strings - -It is intended to support all valid date, time and datetime formats per the -ISO-8601 specification. - -..versionadded:: 2.7.0 -""" -from datetime import datetime, timedelta, time, date -import calendar -from dateutil import tz - -from functools import wraps - -import re -import six - -__all__ = ["isoparse", "isoparser"] - - -def _takes_ascii(f): - @wraps(f) - def func(self, str_in, *args, **kwargs): - # If it's a stream, read the whole thing - str_in = getattr(str_in, 'read', lambda: str_in)() - - # If it's unicode, turn it into bytes, since ISO-8601 only covers ASCII - if isinstance(str_in, six.text_type): - # ASCII is the same in UTF-8 - try: - str_in = str_in.encode('ascii') - except UnicodeEncodeError as e: - msg = 'ISO-8601 strings should contain only ASCII characters' - six.raise_from(ValueError(msg), e) - - return f(self, str_in, *args, **kwargs) - - return func - - -class isoparser(object): - def __init__(self, sep=None): - """ - :param sep: - A single character that separates date and time portions. If - ``None``, the parser will accept any single character. - For strict ISO-8601 adherence, pass ``'T'``. - """ - if sep is not None: - if (len(sep) != 1 or ord(sep) >= 128 or sep in '0123456789'): - raise ValueError('Separator must be a single, non-numeric ' + - 'ASCII character') - - sep = sep.encode('ascii') - - self._sep = sep - - @_takes_ascii - def isoparse(self, dt_str): - """ - Parse an ISO-8601 datetime string into a :class:`datetime.datetime`. - - An ISO-8601 datetime string consists of a date portion, followed - optionally by a time portion - the date and time portions are separated - by a single character separator, which is ``T`` in the official - standard. Incomplete date formats (such as ``YYYY-MM``) may *not* be - combined with a time portion. - - Supported date formats are: - - Common: - - - ``YYYY`` - - ``YYYY-MM`` or ``YYYYMM`` - - ``YYYY-MM-DD`` or ``YYYYMMDD`` - - Uncommon: - - - ``YYYY-Www`` or ``YYYYWww`` - ISO week (day defaults to 0) - - ``YYYY-Www-D`` or ``YYYYWwwD`` - ISO week and day - - The ISO week and day numbering follows the same logic as - :func:`datetime.date.isocalendar`. - - Supported time formats are: - - - ``hh`` - - ``hh:mm`` or ``hhmm`` - - ``hh:mm:ss`` or ``hhmmss`` - - ``hh:mm:ss.ssssss`` (Up to 6 sub-second digits) - - Midnight is a special case for `hh`, as the standard supports both - 00:00 and 24:00 as a representation. The decimal separator can be - either a dot or a comma. - - - .. caution:: - - Support for fractional components other than seconds is part of the - ISO-8601 standard, but is not currently implemented in this parser. - - Supported time zone offset formats are: - - - `Z` (UTC) - - `±HH:MM` - - `±HHMM` - - `±HH` - - Offsets will be represented as :class:`dateutil.tz.tzoffset` objects, - with the exception of UTC, which will be represented as - :class:`dateutil.tz.tzutc`. Time zone offsets equivalent to UTC (such - as `+00:00`) will also be represented as :class:`dateutil.tz.tzutc`. - - :param dt_str: - A string or stream containing only an ISO-8601 datetime string - - :return: - Returns a :class:`datetime.datetime` representing the string. - Unspecified components default to their lowest value. - - .. warning:: - - As of version 2.7.0, the strictness of the parser should not be - considered a stable part of the contract. Any valid ISO-8601 string - that parses correctly with the default settings will continue to - parse correctly in future versions, but invalid strings that - currently fail (e.g. ``2017-01-01T00:00+00:00:00``) are not - guaranteed to continue failing in future versions if they encode - a valid date. - - .. versionadded:: 2.7.0 - """ - components, pos = self._parse_isodate(dt_str) - - if len(dt_str) > pos: - if self._sep is None or dt_str[pos:pos + 1] == self._sep: - components += self._parse_isotime(dt_str[pos + 1:]) - else: - raise ValueError('String contains unknown ISO components') - - if len(components) > 3 and components[3] == 24: - components[3] = 0 - return datetime(*components) + timedelta(days=1) - - return datetime(*components) - - @_takes_ascii - def parse_isodate(self, datestr): - """ - Parse the date portion of an ISO string. - - :param datestr: - The string portion of an ISO string, without a separator - - :return: - Returns a :class:`datetime.date` object - """ - components, pos = self._parse_isodate(datestr) - if pos < len(datestr): - raise ValueError('String contains unknown ISO ' + - 'components: {!r}'.format(datestr.decode('ascii'))) - return date(*components) - - @_takes_ascii - def parse_isotime(self, timestr): - """ - Parse the time portion of an ISO string. - - :param timestr: - The time portion of an ISO string, without a separator - - :return: - Returns a :class:`datetime.time` object - """ - components = self._parse_isotime(timestr) - if components[0] == 24: - components[0] = 0 - return time(*components) - - @_takes_ascii - def parse_tzstr(self, tzstr, zero_as_utc=True): - """ - Parse a valid ISO time zone string. - - See :func:`isoparser.isoparse` for details on supported formats. - - :param tzstr: - A string representing an ISO time zone offset - - :param zero_as_utc: - Whether to return :class:`dateutil.tz.tzutc` for zero-offset zones - - :return: - Returns :class:`dateutil.tz.tzoffset` for offsets and - :class:`dateutil.tz.tzutc` for ``Z`` and (if ``zero_as_utc`` is - specified) offsets equivalent to UTC. - """ - return self._parse_tzstr(tzstr, zero_as_utc=zero_as_utc) - - # Constants - _DATE_SEP = b'-' - _TIME_SEP = b':' - _FRACTION_REGEX = re.compile(b'[\\.,]([0-9]+)') - - def _parse_isodate(self, dt_str): - try: - return self._parse_isodate_common(dt_str) - except ValueError: - return self._parse_isodate_uncommon(dt_str) - - def _parse_isodate_common(self, dt_str): - len_str = len(dt_str) - components = [1, 1, 1] - - if len_str < 4: - raise ValueError('ISO string too short') - - # Year - components[0] = int(dt_str[0:4]) - pos = 4 - if pos >= len_str: - return components, pos - - has_sep = dt_str[pos:pos + 1] == self._DATE_SEP - if has_sep: - pos += 1 - - # Month - if len_str - pos < 2: - raise ValueError('Invalid common month') - - components[1] = int(dt_str[pos:pos + 2]) - pos += 2 - - if pos >= len_str: - if has_sep: - return components, pos - else: - raise ValueError('Invalid ISO format') - - if has_sep: - if dt_str[pos:pos + 1] != self._DATE_SEP: - raise ValueError('Invalid separator in ISO string') - pos += 1 - - # Day - if len_str - pos < 2: - raise ValueError('Invalid common day') - components[2] = int(dt_str[pos:pos + 2]) - return components, pos + 2 - - def _parse_isodate_uncommon(self, dt_str): - if len(dt_str) < 4: - raise ValueError('ISO string too short') - - # All ISO formats start with the year - year = int(dt_str[0:4]) - - has_sep = dt_str[4:5] == self._DATE_SEP - - pos = 4 + has_sep # Skip '-' if it's there - if dt_str[pos:pos + 1] == b'W': - # YYYY-?Www-?D? - pos += 1 - weekno = int(dt_str[pos:pos + 2]) - pos += 2 - - dayno = 1 - if len(dt_str) > pos: - if (dt_str[pos:pos + 1] == self._DATE_SEP) != has_sep: - raise ValueError('Inconsistent use of dash separator') - - pos += has_sep - - dayno = int(dt_str[pos:pos + 1]) - pos += 1 - - base_date = self._calculate_weekdate(year, weekno, dayno) - else: - # YYYYDDD or YYYY-DDD - if len(dt_str) - pos < 3: - raise ValueError('Invalid ordinal day') - - ordinal_day = int(dt_str[pos:pos + 3]) - pos += 3 - - if ordinal_day < 1 or ordinal_day > (365 + calendar.isleap(year)): - raise ValueError('Invalid ordinal day' + - ' {} for year {}'.format(ordinal_day, year)) - - base_date = date(year, 1, 1) + timedelta(days=ordinal_day - 1) - - components = [base_date.year, base_date.month, base_date.day] - return components, pos - - def _calculate_weekdate(self, year, week, day): - """ - Calculate the day of corresponding to the ISO year-week-day calendar. - - This function is effectively the inverse of - :func:`datetime.date.isocalendar`. - - :param year: - The year in the ISO calendar - - :param week: - The week in the ISO calendar - range is [1, 53] - - :param day: - The day in the ISO calendar - range is [1 (MON), 7 (SUN)] - - :return: - Returns a :class:`datetime.date` - """ - if not 0 < week < 54: - raise ValueError('Invalid week: {}'.format(week)) - - if not 0 < day < 8: # Range is 1-7 - raise ValueError('Invalid weekday: {}'.format(day)) - - # Get week 1 for the specific year: - jan_4 = date(year, 1, 4) # Week 1 always has January 4th in it - week_1 = jan_4 - timedelta(days=jan_4.isocalendar()[2] - 1) - - # Now add the specific number of weeks and days to get what we want - week_offset = (week - 1) * 7 + (day - 1) - return week_1 + timedelta(days=week_offset) - - def _parse_isotime(self, timestr): - len_str = len(timestr) - components = [0, 0, 0, 0, None] - pos = 0 - comp = -1 - - if len_str < 2: - raise ValueError('ISO time too short') - - has_sep = False - - while pos < len_str and comp < 5: - comp += 1 - - if timestr[pos:pos + 1] in b'-+Zz': - # Detect time zone boundary - components[-1] = self._parse_tzstr(timestr[pos:]) - pos = len_str - break - - if comp == 1 and timestr[pos:pos+1] == self._TIME_SEP: - has_sep = True - pos += 1 - elif comp == 2 and has_sep: - if timestr[pos:pos+1] != self._TIME_SEP: - raise ValueError('Inconsistent use of colon separator') - pos += 1 - - if comp < 3: - # Hour, minute, second - components[comp] = int(timestr[pos:pos + 2]) - pos += 2 - - if comp == 3: - # Fraction of a second - frac = self._FRACTION_REGEX.match(timestr[pos:]) - if not frac: - continue - - us_str = frac.group(1)[:6] # Truncate to microseconds - components[comp] = int(us_str) * 10**(6 - len(us_str)) - pos += len(frac.group()) - - if pos < len_str: - raise ValueError('Unused components in ISO string') - - if components[0] == 24: - # Standard supports 00:00 and 24:00 as representations of midnight - if any(component != 0 for component in components[1:4]): - raise ValueError('Hour may only be 24 at 24:00:00.000') - - return components - - def _parse_tzstr(self, tzstr, zero_as_utc=True): - if tzstr == b'Z' or tzstr == b'z': - return tz.UTC - - if len(tzstr) not in {3, 5, 6}: - raise ValueError('Time zone offset must be 1, 3, 5 or 6 characters') - - if tzstr[0:1] == b'-': - mult = -1 - elif tzstr[0:1] == b'+': - mult = 1 - else: - raise ValueError('Time zone offset requires sign') - - hours = int(tzstr[1:3]) - if len(tzstr) == 3: - minutes = 0 - else: - minutes = int(tzstr[(4 if tzstr[3:4] == self._TIME_SEP else 3):]) - - if zero_as_utc and hours == 0 and minutes == 0: - return tz.UTC - else: - if minutes > 59: - raise ValueError('Invalid minutes in time zone offset') - - if hours > 23: - raise ValueError('Invalid hours in time zone offset') - - return tz.tzoffset(None, mult * (hours * 60 + minutes) * 60) - - -DEFAULT_ISOPARSER = isoparser() -isoparse = DEFAULT_ISOPARSER.isoparse diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py deleted file mode 100644 index 29c802bcc83b3ca35bbd0e6521f47a368b5f9092..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py +++ /dev/null @@ -1,5 +0,0 @@ -import sys -from fontTools.mtiLib import main - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/webworker/awaitable-queue.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/webworker/awaitable-queue.ts deleted file mode 100644 index 18abae958a39cddec3aae94a1fd4b68dd9ee6a85..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/webworker/awaitable-queue.ts +++ /dev/null @@ -1,66 +0,0 @@ -// Copied from https://github.com/rstudio/shinylive/blob/v0.1.4/src/awaitable-queue.ts -// and modified. -/* -MIT License - -Copyright (c) 2022 RStudio, PBC - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. -*/ - -// A queue with an async dequeue operation -export class AwaitableQueue { - _buffer: T[] = []; - _promise: Promise; - _resolve: () => void; - - constructor() { - // make TS compiler happy - this._resolve = null as any; - this._promise = null as any; - - // Actually initialize _promise and _resolve - this._notifyAll(); - } - - private async _wait(): Promise { - await this._promise; - } - - private _notifyAll(): void { - if (this._resolve) { - this._resolve(); - } - this._promise = new Promise((resolve) => (this._resolve = resolve)); - } - - public async dequeue(): Promise { - // Must use a while-loop here, there might be multiple callers waiting to - // deqeueue simultaneously - while (this._buffer.length === 0) { - await this._wait(); - } - return this._buffer.shift()!; - } - - public enqueue(x: T): void { - this._buffer.push(x); - this._notifyAll(); - } -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/repocard.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/repocard.py deleted file mode 100644 index f07366006953dea2fca67ef360d2524ec04be653..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/repocard.py +++ /dev/null @@ -1,820 +0,0 @@ -import os -import re -from pathlib import Path -from typing import Any, Dict, Literal, Optional, Type, Union - -import requests -import yaml - -from huggingface_hub.file_download import hf_hub_download -from huggingface_hub.hf_api import upload_file -from huggingface_hub.repocard_data import ( - CardData, - DatasetCardData, - EvalResult, - ModelCardData, - SpaceCardData, - eval_results_to_model_index, - model_index_to_eval_results, -) -from huggingface_hub.utils import get_session, is_jinja_available, yaml_dump - -from .constants import REPOCARD_NAME -from .utils import EntryNotFoundError, SoftTemporaryDirectory, validate_hf_hub_args -from .utils.logging import get_logger - - -TEMPLATE_MODELCARD_PATH = Path(__file__).parent / "templates" / "modelcard_template.md" -TEMPLATE_DATASETCARD_PATH = Path(__file__).parent / "templates" / "datasetcard_template.md" - -# exact same regex as in the Hub server. Please keep in sync. -# See https://github.com/huggingface/moon-landing/blob/main/server/lib/ViewMarkdown.ts#L18 -REGEX_YAML_BLOCK = re.compile(r"^(\s*---[\r\n]+)([\S\s]*?)([\r\n]+---(\r\n|\n|$))") - -logger = get_logger(__name__) - - -class RepoCard: - card_data_class = CardData - default_template_path = TEMPLATE_MODELCARD_PATH - repo_type = "model" - - def __init__(self, content: str, ignore_metadata_errors: bool = False): - """Initialize a RepoCard from string content. The content should be a - Markdown file with a YAML block at the beginning and a Markdown body. - - Args: - content (`str`): The content of the Markdown file. - - Example: - ```python - >>> from huggingface_hub.repocard import RepoCard - >>> text = ''' - ... --- - ... language: en - ... license: mit - ... --- - ... - ... # My repo - ... ''' - >>> card = RepoCard(text) - >>> card.data.to_dict() - {'language': 'en', 'license': 'mit'} - >>> card.text - '\\n# My repo\\n' - - ``` - - Raises the following error: - - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - when the content of the repo card metadata is not a dictionary. - - - """ - - # Set the content of the RepoCard, as well as underlying .data and .text attributes. - # See the `content` property setter for more details. - self.ignore_metadata_errors = ignore_metadata_errors - self.content = content - - @property - def content(self): - """The content of the RepoCard, including the YAML block and the Markdown body.""" - line_break = _detect_line_ending(self._content) or "\n" - return f"---{line_break}{self.data.to_yaml(line_break=line_break)}{line_break}---{line_break}{self.text}" - - @content.setter - def content(self, content: str): - """Set the content of the RepoCard.""" - self._content = content - - match = REGEX_YAML_BLOCK.search(content) - if match: - # Metadata found in the YAML block - yaml_block = match.group(2) - self.text = content[match.end() :] - data_dict = yaml.safe_load(yaml_block) - - if data_dict is None: - data_dict = {} - - # The YAML block's data should be a dictionary - if not isinstance(data_dict, dict): - raise ValueError("repo card metadata block should be a dict") - else: - # Model card without metadata... create empty metadata - logger.warning("Repo card metadata block was not found. Setting CardData to empty.") - data_dict = {} - self.text = content - - self.data = self.card_data_class(**data_dict, ignore_metadata_errors=self.ignore_metadata_errors) - - def __str__(self): - return self.content - - def save(self, filepath: Union[Path, str]): - r"""Save a RepoCard to a file. - - Args: - filepath (`Union[Path, str]`): Filepath to the markdown file to save. - - Example: - ```python - >>> from huggingface_hub.repocard import RepoCard - >>> card = RepoCard("---\nlanguage: en\n---\n# This is a test repo card") - >>> card.save("/tmp/test.md") - - ``` - """ - filepath = Path(filepath) - filepath.parent.mkdir(parents=True, exist_ok=True) - # Preserve newlines as in the existing file. - with open(filepath, mode="w", newline="", encoding="utf-8") as f: - f.write(str(self)) - - @classmethod - def load( - cls, - repo_id_or_path: Union[str, Path], - repo_type: Optional[str] = None, - token: Optional[str] = None, - ignore_metadata_errors: bool = False, - ): - """Initialize a RepoCard from a Hugging Face Hub repo's README.md or a local filepath. - - Args: - repo_id_or_path (`Union[str, Path]`): - The repo ID associated with a Hugging Face Hub repo or a local filepath. - repo_type (`str`, *optional*): - The type of Hugging Face repo to push to. Defaults to None, which will use use "model". Other options - are "dataset" and "space". Not used when loading from a local filepath. If this is called from a child - class, the default value will be the child class's `repo_type`. - token (`str`, *optional*): - Authentication token, obtained with `huggingface_hub.HfApi.login` method. Will default to the stored token. - ignore_metadata_errors (`str`): - If True, errors while parsing the metadata section will be ignored. Some information might be lost during - the process. Use it at your own risk. - - Returns: - [`huggingface_hub.repocard.RepoCard`]: The RepoCard (or subclass) initialized from the repo's - README.md file or filepath. - - Example: - ```python - >>> from huggingface_hub.repocard import RepoCard - >>> card = RepoCard.load("nateraw/food") - >>> assert card.data.tags == ["generated_from_trainer", "image-classification", "pytorch"] - - ``` - """ - - if Path(repo_id_or_path).exists(): - card_path = Path(repo_id_or_path) - elif isinstance(repo_id_or_path, str): - card_path = Path( - hf_hub_download( - repo_id_or_path, - REPOCARD_NAME, - repo_type=repo_type or cls.repo_type, - token=token, - ) - ) - else: - raise ValueError(f"Cannot load RepoCard: path not found on disk ({repo_id_or_path}).") - - # Preserve newlines in the existing file. - with card_path.open(mode="r", newline="", encoding="utf-8") as f: - return cls(f.read(), ignore_metadata_errors=ignore_metadata_errors) - - def validate(self, repo_type: Optional[str] = None): - """Validates card against Hugging Face Hub's card validation logic. - Using this function requires access to the internet, so it is only called - internally by [`huggingface_hub.repocard.RepoCard.push_to_hub`]. - - Args: - repo_type (`str`, *optional*, defaults to "model"): - The type of Hugging Face repo to push to. Options are "model", "dataset", and "space". - If this function is called from a child class, the default will be the child class's `repo_type`. - - - Raises the following errors: - - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if the card fails validation checks. - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the request to the Hub API fails for any other reason. - - - """ - - # If repo type is provided, otherwise, use the repo type of the card. - repo_type = repo_type or self.repo_type - - body = { - "repoType": repo_type, - "content": str(self), - } - headers = {"Accept": "text/plain"} - - try: - r = get_session().post("https://huggingface.co/api/validate-yaml", body, headers=headers) - r.raise_for_status() - except requests.exceptions.HTTPError as exc: - if r.status_code == 400: - raise ValueError(r.text) - else: - raise exc - - def push_to_hub( - self, - repo_id: str, - token: Optional[str] = None, - repo_type: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - ): - """Push a RepoCard to a Hugging Face Hub repo. - - Args: - repo_id (`str`): - The repo ID of the Hugging Face Hub repo to push to. Example: "nateraw/food". - token (`str`, *optional*): - Authentication token, obtained with `huggingface_hub.HfApi.login` method. Will default to - the stored token. - repo_type (`str`, *optional*, defaults to "model"): - The type of Hugging Face repo to push to. Options are "model", "dataset", and "space". If this - function is called by a child class, it will default to the child class's `repo_type`. - commit_message (`str`, *optional*): - The summary / title / first line of the generated commit. - commit_description (`str`, *optional*) - The description of the generated commit. - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - create_pr (`bool`, *optional*): - Whether or not to create a Pull Request with this commit. Defaults to `False`. - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. - If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. - If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. - Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be - especially useful if the repo is updated / committed to concurrently. - Returns: - `str`: URL of the commit which updated the card metadata. - """ - - # If repo type is provided, otherwise, use the repo type of the card. - repo_type = repo_type or self.repo_type - - # Validate card before pushing to hub - self.validate(repo_type=repo_type) - - with SoftTemporaryDirectory() as tmpdir: - tmp_path = Path(tmpdir) / REPOCARD_NAME - tmp_path.write_text(str(self)) - url = upload_file( - path_or_fileobj=str(tmp_path), - path_in_repo=REPOCARD_NAME, - repo_id=repo_id, - token=token, - repo_type=repo_type, - commit_message=commit_message, - commit_description=commit_description, - create_pr=create_pr, - revision=revision, - parent_commit=parent_commit, - ) - return url - - @classmethod - def from_template( - cls, - card_data: CardData, - template_path: Optional[str] = None, - **template_kwargs, - ): - """Initialize a RepoCard from a template. By default, it uses the default template. - - Templates are Jinja2 templates that can be customized by passing keyword arguments. - - Args: - card_data (`huggingface_hub.CardData`): - A huggingface_hub.CardData instance containing the metadata you want to include in the YAML - header of the repo card on the Hugging Face Hub. - template_path (`str`, *optional*): - A path to a markdown file with optional Jinja template variables that can be filled - in with `template_kwargs`. Defaults to the default template. - - Returns: - [`huggingface_hub.repocard.RepoCard`]: A RepoCard instance with the specified card data and content from the - template. - """ - if is_jinja_available(): - import jinja2 - else: - raise ImportError( - "Using RepoCard.from_template requires Jinja2 to be installed. Please" - " install it with `pip install Jinja2`." - ) - - kwargs = card_data.to_dict().copy() - kwargs.update(template_kwargs) # Template_kwargs have priority - template = jinja2.Template(Path(template_path or cls.default_template_path).read_text()) - content = template.render(card_data=card_data.to_yaml(), **kwargs) - return cls(content) - - -class ModelCard(RepoCard): - card_data_class = ModelCardData - default_template_path = TEMPLATE_MODELCARD_PATH - repo_type = "model" - - @classmethod - def from_template( # type: ignore # violates Liskov property but easier to use - cls, - card_data: ModelCardData, - template_path: Optional[str] = None, - **template_kwargs, - ): - """Initialize a ModelCard from a template. By default, it uses the default template, which can be found here: - https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md - - Templates are Jinja2 templates that can be customized by passing keyword arguments. - - Args: - card_data (`huggingface_hub.ModelCardData`): - A huggingface_hub.ModelCardData instance containing the metadata you want to include in the YAML - header of the model card on the Hugging Face Hub. - template_path (`str`, *optional*): - A path to a markdown file with optional Jinja template variables that can be filled - in with `template_kwargs`. Defaults to the default template. - - Returns: - [`huggingface_hub.ModelCard`]: A ModelCard instance with the specified card data and content from the - template. - - Example: - ```python - >>> from huggingface_hub import ModelCard, ModelCardData, EvalResult - - >>> # Using the Default Template - >>> card_data = ModelCardData( - ... language='en', - ... license='mit', - ... library_name='timm', - ... tags=['image-classification', 'resnet'], - ... datasets=['beans'], - ... metrics=['accuracy'], - ... ) - >>> card = ModelCard.from_template( - ... card_data, - ... model_description='This model does x + y...' - ... ) - - >>> # Including Evaluation Results - >>> card_data = ModelCardData( - ... language='en', - ... tags=['image-classification', 'resnet'], - ... eval_results=[ - ... EvalResult( - ... task_type='image-classification', - ... dataset_type='beans', - ... dataset_name='Beans', - ... metric_type='accuracy', - ... metric_value=0.9, - ... ), - ... ], - ... model_name='my-cool-model', - ... ) - >>> card = ModelCard.from_template(card_data) - - >>> # Using a Custom Template - >>> card_data = ModelCardData( - ... language='en', - ... tags=['image-classification', 'resnet'] - ... ) - >>> card = ModelCard.from_template( - ... card_data=card_data, - ... template_path='./src/huggingface_hub/templates/modelcard_template.md', - ... custom_template_var='custom value', # will be replaced in template if it exists - ... ) - - ``` - """ - return super().from_template(card_data, template_path, **template_kwargs) - - -class DatasetCard(RepoCard): - card_data_class = DatasetCardData - default_template_path = TEMPLATE_DATASETCARD_PATH - repo_type = "dataset" - - @classmethod - def from_template( # type: ignore # violates Liskov property but easier to use - cls, - card_data: DatasetCardData, - template_path: Optional[str] = None, - **template_kwargs, - ): - """Initialize a DatasetCard from a template. By default, it uses the default template, which can be found here: - https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md - - Templates are Jinja2 templates that can be customized by passing keyword arguments. - - Args: - card_data (`huggingface_hub.DatasetCardData`): - A huggingface_hub.DatasetCardData instance containing the metadata you want to include in the YAML - header of the dataset card on the Hugging Face Hub. - template_path (`str`, *optional*): - A path to a markdown file with optional Jinja template variables that can be filled - in with `template_kwargs`. Defaults to the default template. - - Returns: - [`huggingface_hub.DatasetCard`]: A DatasetCard instance with the specified card data and content from the - template. - - Example: - ```python - >>> from huggingface_hub import DatasetCard, DatasetCardData - - >>> # Using the Default Template - >>> card_data = DatasetCardData( - ... language='en', - ... license='mit', - ... annotations_creators='crowdsourced', - ... task_categories=['text-classification'], - ... task_ids=['sentiment-classification', 'text-scoring'], - ... multilinguality='monolingual', - ... pretty_name='My Text Classification Dataset', - ... ) - >>> card = DatasetCard.from_template( - ... card_data, - ... pretty_name=card_data.pretty_name, - ... ) - - >>> # Using a Custom Template - >>> card_data = DatasetCardData( - ... language='en', - ... license='mit', - ... ) - >>> card = DatasetCard.from_template( - ... card_data=card_data, - ... template_path='./src/huggingface_hub/templates/datasetcard_template.md', - ... custom_template_var='custom value', # will be replaced in template if it exists - ... ) - - ``` - """ - return super().from_template(card_data, template_path, **template_kwargs) - - -class SpaceCard(RepoCard): - card_data_class = SpaceCardData - default_template_path = TEMPLATE_MODELCARD_PATH - repo_type = "space" - - -def _detect_line_ending(content: str) -> Literal["\r", "\n", "\r\n", None]: # noqa: F722 - """Detect the line ending of a string. Used by RepoCard to avoid making huge diff on newlines. - - Uses same implementation as in Hub server, keep it in sync. - - Returns: - str: The detected line ending of the string. - """ - cr = content.count("\r") - lf = content.count("\n") - crlf = content.count("\r\n") - if cr + lf == 0: - return None - if crlf == cr and crlf == lf: - return "\r\n" - if cr > lf: - return "\r" - else: - return "\n" - - -def metadata_load(local_path: Union[str, Path]) -> Optional[Dict]: - content = Path(local_path).read_text() - match = REGEX_YAML_BLOCK.search(content) - if match: - yaml_block = match.group(2) - data = yaml.safe_load(yaml_block) - if data is None or isinstance(data, dict): - return data - raise ValueError("repo card metadata block should be a dict") - else: - return None - - -def metadata_save(local_path: Union[str, Path], data: Dict) -> None: - """ - Save the metadata dict in the upper YAML part Trying to preserve newlines as - in the existing file. Docs about open() with newline="" parameter: - https://docs.python.org/3/library/functions.html?highlight=open#open Does - not work with "^M" linebreaks, which are replaced by \n - """ - line_break = "\n" - content = "" - # try to detect existing newline character - if os.path.exists(local_path): - with open(local_path, "r", newline="", encoding="utf8") as readme: - content = readme.read() - if isinstance(readme.newlines, tuple): - line_break = readme.newlines[0] - elif isinstance(readme.newlines, str): - line_break = readme.newlines - - # creates a new file if it not - with open(local_path, "w", newline="", encoding="utf8") as readme: - data_yaml = yaml_dump(data, sort_keys=False, line_break=line_break) - # sort_keys: keep dict order - match = REGEX_YAML_BLOCK.search(content) - if match: - output = content[: match.start()] + f"---{line_break}{data_yaml}---{line_break}" + content[match.end() :] - else: - output = f"---{line_break}{data_yaml}---{line_break}{content}" - - readme.write(output) - readme.close() - - -def metadata_eval_result( - *, - model_pretty_name: str, - task_pretty_name: str, - task_id: str, - metrics_pretty_name: str, - metrics_id: str, - metrics_value: Any, - dataset_pretty_name: str, - dataset_id: str, - metrics_config: Optional[str] = None, - metrics_verified: bool = False, - dataset_config: Optional[str] = None, - dataset_split: Optional[str] = None, - dataset_revision: Optional[str] = None, - metrics_verification_token: Optional[str] = None, -) -> Dict: - """ - Creates a metadata dict with the result from a model evaluated on a dataset. - - Args: - model_pretty_name (`str`): - The name of the model in natural language. - task_pretty_name (`str`): - The name of a task in natural language. - task_id (`str`): - Example: automatic-speech-recognition. A task id. - metrics_pretty_name (`str`): - A name for the metric in natural language. Example: Test WER. - metrics_id (`str`): - Example: wer. A metric id from https://hf.co/metrics. - metrics_value (`Any`): - The value from the metric. Example: 20.0 or "20.0 ± 1.2". - dataset_pretty_name (`str`): - The name of the dataset in natural language. - dataset_id (`str`): - Example: common_voice. A dataset id from https://hf.co/datasets. - metrics_config (`str`, *optional*): - The name of the metric configuration used in `load_metric()`. - Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`. - metrics_verified (`bool`, *optional*, defaults to `False`): - Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set. - dataset_config (`str`, *optional*): - Example: fr. The name of the dataset configuration used in `load_dataset()`. - dataset_split (`str`, *optional*): - Example: test. The name of the dataset split used in `load_dataset()`. - dataset_revision (`str`, *optional*): - Example: 5503434ddd753f426f4b38109466949a1217c2bb. The name of the dataset dataset revision - used in `load_dataset()`. - metrics_verification_token (`bool`, *optional*): - A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. - - Returns: - `dict`: a metadata dict with the result from a model evaluated on a dataset. - - Example: - ```python - >>> from huggingface_hub import metadata_eval_result - >>> results = metadata_eval_result( - ... model_pretty_name="RoBERTa fine-tuned on ReactionGIF", - ... task_pretty_name="Text Classification", - ... task_id="text-classification", - ... metrics_pretty_name="Accuracy", - ... metrics_id="accuracy", - ... metrics_value=0.2662102282047272, - ... dataset_pretty_name="ReactionJPEG", - ... dataset_id="julien-c/reactionjpeg", - ... dataset_config="default", - ... dataset_split="test", - ... ) - >>> results == { - ... 'model-index': [ - ... { - ... 'name': 'RoBERTa fine-tuned on ReactionGIF', - ... 'results': [ - ... { - ... 'task': { - ... 'type': 'text-classification', - ... 'name': 'Text Classification' - ... }, - ... 'dataset': { - ... 'name': 'ReactionJPEG', - ... 'type': 'julien-c/reactionjpeg', - ... 'config': 'default', - ... 'split': 'test' - ... }, - ... 'metrics': [ - ... { - ... 'type': 'accuracy', - ... 'value': 0.2662102282047272, - ... 'name': 'Accuracy', - ... 'verified': False - ... } - ... ] - ... } - ... ] - ... } - ... ] - ... } - True - - ``` - """ - - return { - "model-index": eval_results_to_model_index( - model_name=model_pretty_name, - eval_results=[ - EvalResult( - task_name=task_pretty_name, - task_type=task_id, - metric_name=metrics_pretty_name, - metric_type=metrics_id, - metric_value=metrics_value, - dataset_name=dataset_pretty_name, - dataset_type=dataset_id, - metric_config=metrics_config, - verified=metrics_verified, - verify_token=metrics_verification_token, - dataset_config=dataset_config, - dataset_split=dataset_split, - dataset_revision=dataset_revision, - ) - ], - ) - } - - -@validate_hf_hub_args -def metadata_update( - repo_id: str, - metadata: Dict, - *, - repo_type: Optional[str] = None, - overwrite: bool = False, - token: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - revision: Optional[str] = None, - create_pr: bool = False, - parent_commit: Optional[str] = None, -) -> str: - """ - Updates the metadata in the README.md of a repository on the Hugging Face Hub. - If the README.md file doesn't exist yet, a new one is created with metadata and an - the default ModelCard or DatasetCard template. For `space` repo, an error is thrown - as a Space cannot exist without a `README.md` file. - - Args: - repo_id (`str`): - The name of the repository. - metadata (`dict`): - A dictionary containing the metadata to be updated. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if updating to a dataset or space, - `None` or `"model"` if updating to a model. Default is `None`. - overwrite (`bool`, *optional*, defaults to `False`): - If set to `True` an existing field can be overwritten, otherwise - attempting to overwrite an existing field will cause an error. - token (`str`, *optional*): - The Hugging Face authentication token. - commit_message (`str`, *optional*): - The summary / title / first line of the generated commit. Defaults to - `f"Update metadata with huggingface_hub"` - commit_description (`str` *optional*) - The description of the generated commit - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the - `"main"` branch. - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request from `revision` with that commit. - Defaults to `False`. - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. - If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. - If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. - Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be - especially useful if the repo is updated / committed to concurrently. - Returns: - `str`: URL of the commit which updated the card metadata. - - Example: - ```python - >>> from huggingface_hub import metadata_update - >>> metadata = {'model-index': [{'name': 'RoBERTa fine-tuned on ReactionGIF', - ... 'results': [{'dataset': {'name': 'ReactionGIF', - ... 'type': 'julien-c/reactiongif'}, - ... 'metrics': [{'name': 'Recall', - ... 'type': 'recall', - ... 'value': 0.7762102282047272}], - ... 'task': {'name': 'Text Classification', - ... 'type': 'text-classification'}}]}]} - >>> url = metadata_update("hf-internal-testing/reactiongif-roberta-card", metadata) - - ``` - """ - commit_message = commit_message if commit_message is not None else "Update metadata with huggingface_hub" - - # Card class given repo_type - card_class: Type[RepoCard] - if repo_type is None or repo_type == "model": - card_class = ModelCard - elif repo_type == "dataset": - card_class = DatasetCard - elif repo_type == "space": - card_class = RepoCard - else: - raise ValueError(f"Unknown repo_type: {repo_type}") - - # Either load repo_card from the Hub or create an empty one. - # NOTE: Will not create the repo if it doesn't exist. - try: - card = card_class.load(repo_id, token=token, repo_type=repo_type) - except EntryNotFoundError: - if repo_type == "space": - raise ValueError("Cannot update metadata on a Space that doesn't contain a `README.md` file.") - - # Initialize a ModelCard or DatasetCard from default template and no data. - card = card_class.from_template(CardData()) - - for key, value in metadata.items(): - if key == "model-index": - # if the new metadata doesn't include a name, either use existing one or repo name - if "name" not in value[0]: - value[0]["name"] = getattr(card, "model_name", repo_id) - model_name, new_results = model_index_to_eval_results(value) - if card.data.eval_results is None: - card.data.eval_results = new_results - card.data.model_name = model_name - else: - existing_results = card.data.eval_results - - # Iterate over new results - # Iterate over existing results - # If both results describe the same metric but value is different: - # If overwrite=True: overwrite the metric value - # Else: raise ValueError - # Else: append new result to existing ones. - for new_result in new_results: - result_found = False - for existing_result in existing_results: - if new_result.is_equal_except_value(existing_result): - if new_result != existing_result and not overwrite: - raise ValueError( - "You passed a new value for the existing metric" - f" 'name: {new_result.metric_name}, type: " - f"{new_result.metric_type}'. Set `overwrite=True`" - " to overwrite existing metrics." - ) - result_found = True - existing_result.metric_value = new_result.metric_value - if existing_result.verified is True: - existing_result.verify_token = new_result.verify_token - if not result_found: - card.data.eval_results.append(new_result) - else: - # Any metadata that is not a result metric - if card.data.get(key) is not None and not overwrite and card.data.get(key) != value: - raise ValueError( - f"You passed a new value for the existing meta data field '{key}'." - " Set `overwrite=True` to overwrite existing metadata." - ) - else: - card.data[key] = value - - return card.push_to_hub( - repo_id, - token=token, - repo_type=repo_type, - commit_message=commit_message, - commit_description=commit_description, - create_pr=create_pr, - revision=revision, - parent_commit=parent_commit, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_macosx.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_macosx.py deleted file mode 100644 index a39f5b5b14978ee7b44a6b35cec227bf18a3cc9c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_macosx.py +++ /dev/null @@ -1,236 +0,0 @@ -import contextlib -import os -import signal -import socket - -import matplotlib as mpl -from matplotlib import _api, cbook -from matplotlib._pylab_helpers import Gcf -from . import _macosx -from .backend_agg import FigureCanvasAgg -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, NavigationToolbar2, - ResizeEvent, TimerBase) - - -class TimerMac(_macosx.Timer, TimerBase): - """Subclass of `.TimerBase` using CFRunLoop timer events.""" - # completely implemented at the C-level (in _macosx.Timer) - - -class FigureCanvasMac(FigureCanvasAgg, _macosx.FigureCanvas, FigureCanvasBase): - # docstring inherited - - # Ideally this class would be `class FCMacAgg(FCAgg, FCMac)` - # (FC=FigureCanvas) where FCMac would be an ObjC-implemented mac-specific - # class also inheriting from FCBase (this is the approach with other GUI - # toolkits). However, writing an extension type inheriting from a Python - # base class is slightly tricky (the extension type must be a heap type), - # and we can just as well lift the FCBase base up one level, keeping it *at - # the end* to have the right method resolution order. - - # Events such as button presses, mouse movements, and key presses are - # handled in C and events (MouseEvent, etc.) are triggered from there. - - required_interactive_framework = "macosx" - _timer_cls = TimerMac - manager_class = _api.classproperty(lambda cls: FigureManagerMac) - - def __init__(self, figure): - super().__init__(figure=figure) - self._draw_pending = False - self._is_drawing = False - # Keep track of the timers that are alive - self._timers = set() - - def draw(self): - """Render the figure and update the macosx canvas.""" - # The renderer draw is done here; delaying causes problems with code - # that uses the result of the draw() to update plot elements. - if self._is_drawing: - return - with cbook._setattr_cm(self, _is_drawing=True): - super().draw() - self.update() - - def draw_idle(self): - # docstring inherited - if not (getattr(self, '_draw_pending', False) or - getattr(self, '_is_drawing', False)): - self._draw_pending = True - # Add a singleshot timer to the eventloop that will call back - # into the Python method _draw_idle to take care of the draw - self._single_shot_timer(self._draw_idle) - - def _single_shot_timer(self, callback): - """Add a single shot timer with the given callback""" - # We need to explicitly stop and remove the timer after - # firing, otherwise segfaults will occur when trying to deallocate - # the singleshot timers. - def callback_func(callback, timer): - callback() - self._timers.remove(timer) - timer.stop() - timer = self.new_timer(interval=0) - timer.single_shot = True - timer.add_callback(callback_func, callback, timer) - self._timers.add(timer) - timer.start() - - def _draw_idle(self): - """ - Draw method for singleshot timer - - This draw method can be added to a singleshot timer, which can - accumulate draws while the eventloop is spinning. This method will - then only draw the first time and short-circuit the others. - """ - with self._idle_draw_cntx(): - if not self._draw_pending: - # Short-circuit because our draw request has already been - # taken care of - return - self._draw_pending = False - self.draw() - - def blit(self, bbox=None): - # docstring inherited - super().blit(bbox) - self.update() - - def resize(self, width, height): - # Size from macOS is logical pixels, dpi is physical. - scale = self.figure.dpi / self.device_pixel_ratio - width /= scale - height /= scale - self.figure.set_size_inches(width, height, forward=False) - ResizeEvent("resize_event", self)._process() - self.draw_idle() - - def start_event_loop(self, timeout=0): - # docstring inherited - with _maybe_allow_interrupt(): - # Call the objc implementation of the event loop after - # setting up the interrupt handling - self._start_event_loop(timeout=timeout) - - -class NavigationToolbar2Mac(_macosx.NavigationToolbar2, NavigationToolbar2): - - def __init__(self, canvas): - data_path = cbook._get_data_path('images') - _, tooltips, image_names, _ = zip(*NavigationToolbar2.toolitems) - _macosx.NavigationToolbar2.__init__( - self, canvas, - tuple(str(data_path / image_name) + ".pdf" - for image_name in image_names if image_name is not None), - tuple(tooltip for tooltip in tooltips if tooltip is not None)) - NavigationToolbar2.__init__(self, canvas) - - def draw_rubberband(self, event, x0, y0, x1, y1): - self.canvas.set_rubberband(int(x0), int(y0), int(x1), int(y1)) - - def remove_rubberband(self): - self.canvas.remove_rubberband() - - def save_figure(self, *args): - directory = os.path.expanduser(mpl.rcParams['savefig.directory']) - filename = _macosx.choose_save_file('Save the figure', - directory, - self.canvas.get_default_filename()) - if filename is None: # Cancel - return - # Save dir for next time, unless empty str (which means use cwd). - if mpl.rcParams['savefig.directory']: - mpl.rcParams['savefig.directory'] = os.path.dirname(filename) - self.canvas.figure.savefig(filename) - - -class FigureManagerMac(_macosx.FigureManager, FigureManagerBase): - _toolbar2_class = NavigationToolbar2Mac - - def __init__(self, canvas, num): - self._shown = False - _macosx.FigureManager.__init__(self, canvas) - icon_path = str(cbook._get_data_path('images/matplotlib.pdf')) - _macosx.FigureManager.set_icon(icon_path) - FigureManagerBase.__init__(self, canvas, num) - self._set_window_mode(mpl.rcParams["macosx.window_mode"]) - if self.toolbar is not None: - self.toolbar.update() - if mpl.is_interactive(): - self.show() - self.canvas.draw_idle() - - def _close_button_pressed(self): - Gcf.destroy(self) - self.canvas.flush_events() - - def destroy(self): - # We need to clear any pending timers that never fired, otherwise - # we get a memory leak from the timer callbacks holding a reference - while self.canvas._timers: - timer = self.canvas._timers.pop() - timer.stop() - super().destroy() - - @classmethod - def start_main_loop(cls): - # Set up a SIGINT handler to allow terminating a plot via CTRL-C. - # The logic is largely copied from qt_compat._maybe_allow_interrupt; see its - # docstring for details. Parts are implemented by wake_on_fd_write in ObjC. - with _maybe_allow_interrupt(): - _macosx.show() - - def show(self): - if not self._shown: - self._show() - self._shown = True - if mpl.rcParams["figure.raise_window"]: - self._raise() - - -@contextlib.contextmanager -def _maybe_allow_interrupt(): - """ - This manager allows to terminate a plot by sending a SIGINT. It is - necessary because the running backend prevents Python interpreter to - run and process signals (i.e., to raise KeyboardInterrupt exception). To - solve this one needs to somehow wake up the interpreter and make it close - the plot window. The implementation is taken from qt_compat, see that - docstring for a more detailed description. - """ - old_sigint_handler = signal.getsignal(signal.SIGINT) - if old_sigint_handler in (None, signal.SIG_IGN, signal.SIG_DFL): - yield - return - - handler_args = None - wsock, rsock = socket.socketpair() - wsock.setblocking(False) - rsock.setblocking(False) - old_wakeup_fd = signal.set_wakeup_fd(wsock.fileno()) - _macosx.wake_on_fd_write(rsock.fileno()) - - def handle(*args): - nonlocal handler_args - handler_args = args - _macosx.stop() - - signal.signal(signal.SIGINT, handle) - try: - yield - finally: - wsock.close() - rsock.close() - signal.set_wakeup_fd(old_wakeup_fd) - signal.signal(signal.SIGINT, old_sigint_handler) - if handler_args is not None: - old_sigint_handler(*handler_args) - - -@_Backend.export -class _BackendMac(_Backend): - FigureCanvas = FigureCanvasMac - FigureManager = FigureManagerMac - mainloop = FigureManagerMac.start_main_loop diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tri/tricontour.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tri/tricontour.py deleted file mode 100644 index 37406451d376d2f608599cfeae865e92b0631ca0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tri/tricontour.py +++ /dev/null @@ -1,9 +0,0 @@ -from ._tricontour import * # noqa: F401, F403 -from matplotlib import _api - - -_api.warn_deprecated( - "3.7", - message=f"Importing {__name__} was deprecated in Matplotlib 3.7 and will " - f"be removed two minor releases later. All functionality is " - f"available via the top-level module matplotlib.tri") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/conftest.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/conftest.py deleted file mode 100644 index 61c2de3e07bac4db323f8704961264d123e01544..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/conftest.py +++ /dev/null @@ -1,2 +0,0 @@ -from matplotlib.testing.conftest import (mpl_test_settings, # noqa - pytest_configure, pytest_unconfigure) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/_dtype_api.h b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/_dtype_api.h deleted file mode 100644 index 39fbc500b0f2e04bbd5ff1f2d194375ebc6b511b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/_dtype_api.h +++ /dev/null @@ -1,408 +0,0 @@ -/* - * DType related API shared by the (experimental) public API And internal API. - */ - -#ifndef NUMPY_CORE_INCLUDE_NUMPY___DTYPE_API_H_ -#define NUMPY_CORE_INCLUDE_NUMPY___DTYPE_API_H_ - -#define __EXPERIMENTAL_DTYPE_API_VERSION 11 - -struct PyArrayMethodObject_tag; - -/* - * Largely opaque struct for DType classes (i.e. metaclass instances). - * The internal definition is currently in `ndarraytypes.h` (export is a bit - * more complex because `PyArray_Descr` is a DTypeMeta internally but not - * externally). - */ -#if !(defined(NPY_INTERNAL_BUILD) && NPY_INTERNAL_BUILD) - - typedef struct PyArray_DTypeMeta_tag { - PyHeapTypeObject super; - - /* - * Most DTypes will have a singleton default instance, for the - * parametric legacy DTypes (bytes, string, void, datetime) this - * may be a pointer to the *prototype* instance? - */ - PyArray_Descr *singleton; - /* Copy of the legacy DTypes type number, usually invalid. */ - int type_num; - - /* The type object of the scalar instances (may be NULL?) */ - PyTypeObject *scalar_type; - /* - * DType flags to signal legacy, parametric, or - * abstract. But plenty of space for additional information/flags. - */ - npy_uint64 flags; - - /* - * Use indirection in order to allow a fixed size for this struct. - * A stable ABI size makes creating a static DType less painful - * while also ensuring flexibility for all opaque API (with one - * indirection due the pointer lookup). - */ - void *dt_slots; - /* Allow growing (at the moment also beyond this) */ - void *reserved[3]; - } PyArray_DTypeMeta; - -#endif /* not internal build */ - -/* - * ****************************************************** - * ArrayMethod API (Casting and UFuncs) - * ****************************************************** - */ -/* - * NOTE: Expected changes: - * * probably split runtime and general flags into two - * * should possibly not use an enum for typedef for more stable ABI? - */ -typedef enum { - /* Flag for whether the GIL is required */ - NPY_METH_REQUIRES_PYAPI = 1 << 0, - /* - * Some functions cannot set floating point error flags, this flag - * gives us the option (not requirement) to skip floating point error - * setup/check. No function should set error flags and ignore them - * since it would interfere with chaining operations (e.g. casting). - */ - NPY_METH_NO_FLOATINGPOINT_ERRORS = 1 << 1, - /* Whether the method supports unaligned access (not runtime) */ - NPY_METH_SUPPORTS_UNALIGNED = 1 << 2, - /* - * Used for reductions to allow reordering the operation. At this point - * assume that if set, it also applies to normal operations though! - */ - NPY_METH_IS_REORDERABLE = 1 << 3, - /* - * Private flag for now for *logic* functions. The logical functions - * `logical_or` and `logical_and` can always cast the inputs to booleans - * "safely" (because that is how the cast to bool is defined). - * @seberg: I am not sure this is the best way to handle this, so its - * private for now (also it is very limited anyway). - * There is one "exception". NA aware dtypes cannot cast to bool - * (hopefully), so the `??->?` loop should error even with this flag. - * But a second NA fallback loop will be necessary. - */ - _NPY_METH_FORCE_CAST_INPUTS = 1 << 17, - - /* All flags which can change at runtime */ - NPY_METH_RUNTIME_FLAGS = ( - NPY_METH_REQUIRES_PYAPI | - NPY_METH_NO_FLOATINGPOINT_ERRORS), -} NPY_ARRAYMETHOD_FLAGS; - - -typedef struct PyArrayMethod_Context_tag { - /* The caller, which is typically the original ufunc. May be NULL */ - PyObject *caller; - /* The method "self". Publically currentl an opaque object. */ - struct PyArrayMethodObject_tag *method; - - /* Operand descriptors, filled in by resolve_descriptors */ - PyArray_Descr **descriptors; - /* Structure may grow (this is harmless for DType authors) */ -} PyArrayMethod_Context; - - -/* - * The main object for creating a new ArrayMethod. We use the typical `slots` - * mechanism used by the Python limited API (see below for the slot defs). - */ -typedef struct { - const char *name; - int nin, nout; - NPY_CASTING casting; - NPY_ARRAYMETHOD_FLAGS flags; - PyArray_DTypeMeta **dtypes; - PyType_Slot *slots; -} PyArrayMethod_Spec; - - -/* - * ArrayMethod slots - * ----------------- - * - * SLOTS IDs For the ArrayMethod creation, once fully public, IDs are fixed - * but can be deprecated and arbitrarily extended. - */ -#define NPY_METH_resolve_descriptors 1 -/* We may want to adapt the `get_loop` signature a bit: */ -#define _NPY_METH_get_loop 2 -#define NPY_METH_get_reduction_initial 3 -/* specific loops for constructions/default get_loop: */ -#define NPY_METH_strided_loop 4 -#define NPY_METH_contiguous_loop 5 -#define NPY_METH_unaligned_strided_loop 6 -#define NPY_METH_unaligned_contiguous_loop 7 -#define NPY_METH_contiguous_indexed_loop 8 - -/* - * The resolve descriptors function, must be able to handle NULL values for - * all output (but not input) `given_descrs` and fill `loop_descrs`. - * Return -1 on error or 0 if the operation is not possible without an error - * set. (This may still be in flux.) - * Otherwise must return the "casting safety", for normal functions, this is - * almost always "safe" (or even "equivalent"?). - * - * `resolve_descriptors` is optional if all output DTypes are non-parametric. - */ -typedef NPY_CASTING (resolve_descriptors_function)( - /* "method" is currently opaque (necessary e.g. to wrap Python) */ - struct PyArrayMethodObject_tag *method, - /* DTypes the method was created for */ - PyArray_DTypeMeta **dtypes, - /* Input descriptors (instances). Outputs may be NULL. */ - PyArray_Descr **given_descrs, - /* Exact loop descriptors to use, must not hold references on error */ - PyArray_Descr **loop_descrs, - npy_intp *view_offset); - - -typedef int (PyArrayMethod_StridedLoop)(PyArrayMethod_Context *context, - char *const *data, const npy_intp *dimensions, const npy_intp *strides, - NpyAuxData *transferdata); - - -typedef int (get_loop_function)( - PyArrayMethod_Context *context, - int aligned, int move_references, - const npy_intp *strides, - PyArrayMethod_StridedLoop **out_loop, - NpyAuxData **out_transferdata, - NPY_ARRAYMETHOD_FLAGS *flags); - -/** - * Query an ArrayMethod for the initial value for use in reduction. - * - * @param context The arraymethod context, mainly to access the descriptors. - * @param reduction_is_empty Whether the reduction is empty. When it is, the - * value returned may differ. In this case it is a "default" value that - * may differ from the "identity" value normally used. For example: - * - `0.0` is the default for `sum([])`. But `-0.0` is the correct - * identity otherwise as it preserves the sign for `sum([-0.0])`. - * - We use no identity for object, but return the default of `0` and `1` - * for the empty `sum([], dtype=object)` and `prod([], dtype=object)`. - * This allows `np.sum(np.array(["a", "b"], dtype=object))` to work. - * - `-inf` or `INT_MIN` for `max` is an identity, but at least `INT_MIN` - * not a good *default* when there are no items. - * @param initial Pointer to initial data to be filled (if possible) - * - * @returns -1, 0, or 1 indicating error, no initial value, and initial being - * successfully filled. Errors must not be given where 0 is correct, NumPy - * may call this even when not strictly necessary. - */ -typedef int (get_reduction_initial_function)( - PyArrayMethod_Context *context, npy_bool reduction_is_empty, - char *initial); - -/* - * The following functions are only used by the wrapping array method defined - * in umath/wrapping_array_method.c - */ - -/* - * The function to convert the given descriptors (passed in to - * `resolve_descriptors`) and translates them for the wrapped loop. - * The new descriptors MUST be viewable with the old ones, `NULL` must be - * supported (for outputs) and should normally be forwarded. - * - * The function must clean up on error. - * - * NOTE: We currently assume that this translation gives "viewable" results. - * I.e. there is no additional casting related to the wrapping process. - * In principle that could be supported, but not sure it is useful. - * This currently also means that e.g. alignment must apply identically - * to the new dtypes. - * - * TODO: Due to the fact that `resolve_descriptors` is also used for `can_cast` - * there is no way to "pass out" the result of this function. This means - * it will be called twice for every ufunc call. - * (I am considering including `auxdata` as an "optional" parameter to - * `resolve_descriptors`, so that it can be filled there if not NULL.) - */ -typedef int translate_given_descrs_func(int nin, int nout, - PyArray_DTypeMeta *wrapped_dtypes[], - PyArray_Descr *given_descrs[], PyArray_Descr *new_descrs[]); - -/** - * The function to convert the actual loop descriptors (as returned by the - * original `resolve_descriptors` function) to the ones the output array - * should use. - * This function must return "viewable" types, it must not mutate them in any - * form that would break the inner-loop logic. Does not need to support NULL. - * - * The function must clean up on error. - * - * @param nargs Number of arguments - * @param new_dtypes The DTypes of the output (usually probably not needed) - * @param given_descrs Original given_descrs to the resolver, necessary to - * fetch any information related to the new dtypes from the original. - * @param original_descrs The `loop_descrs` returned by the wrapped loop. - * @param loop_descrs The output descriptors, compatible to `original_descrs`. - * - * @returns 0 on success, -1 on failure. - */ -typedef int translate_loop_descrs_func(int nin, int nout, - PyArray_DTypeMeta *new_dtypes[], PyArray_Descr *given_descrs[], - PyArray_Descr *original_descrs[], PyArray_Descr *loop_descrs[]); - - -/* - * A traverse loop working on a single array. This is similar to the general - * strided-loop function. This is designed for loops that need to visit every - * element of a single array. - * - * Currently this is used for array clearing, via the NPY_DT_get_clear_loop - * API hook, and zero-filling, via the NPY_DT_get_fill_zero_loop API hook. - * These are most useful for handling arrays storing embedded references to - * python objects or heap-allocated data. - * - * The `void *traverse_context` is passed in because we may need to pass in - * Intepreter state or similar in the future, but we don't want to pass in - * a full context (with pointers to dtypes, method, caller which all make - * no sense for a traverse function). - * - * We assume for now that this context can be just passed through in the - * the future (for structured dtypes). - * - */ -typedef int (traverse_loop_function)( - void *traverse_context, PyArray_Descr *descr, char *data, - npy_intp size, npy_intp stride, NpyAuxData *auxdata); - - -/* - * Simplified get_loop function specific to dtype traversal - * - * It should set the flags needed for the traversal loop and set out_loop to the - * loop function, which must be a valid traverse_loop_function - * pointer. Currently this is used for zero-filling and clearing arrays storing - * embedded references. - * - */ -typedef int (get_traverse_loop_function)( - void *traverse_context, PyArray_Descr *descr, - int aligned, npy_intp fixed_stride, - traverse_loop_function **out_loop, NpyAuxData **out_auxdata, - NPY_ARRAYMETHOD_FLAGS *flags); - - -/* - * **************************** - * DTYPE API - * **************************** - */ - -#define NPY_DT_ABSTRACT 1 << 1 -#define NPY_DT_PARAMETRIC 1 << 2 -#define NPY_DT_NUMERIC 1 << 3 - -/* - * These correspond to slots in the NPY_DType_Slots struct and must - * be in the same order as the members of that struct. If new slots - * get added or old slots get removed NPY_NUM_DTYPE_SLOTS must also - * be updated - */ - -#define NPY_DT_discover_descr_from_pyobject 1 -// this slot is considered private because its API hasn't beed decided -#define _NPY_DT_is_known_scalar_type 2 -#define NPY_DT_default_descr 3 -#define NPY_DT_common_dtype 4 -#define NPY_DT_common_instance 5 -#define NPY_DT_ensure_canonical 6 -#define NPY_DT_setitem 7 -#define NPY_DT_getitem 8 -#define NPY_DT_get_clear_loop 9 -#define NPY_DT_get_fill_zero_loop 10 - -// These PyArray_ArrFunc slots will be deprecated and replaced eventually -// getitem and setitem can be defined as a performance optimization; -// by default the user dtypes call `legacy_getitem_using_DType` and -// `legacy_setitem_using_DType`, respectively. This functionality is -// only supported for basic NumPy DTypes. - - -// used to separate dtype slots from arrfuncs slots -// intended only for internal use but defined here for clarity -#define _NPY_DT_ARRFUNCS_OFFSET (1 << 10) - -// Cast is disabled -// #define NPY_DT_PyArray_ArrFuncs_cast 0 + _NPY_DT_ARRFUNCS_OFFSET - -#define NPY_DT_PyArray_ArrFuncs_getitem 1 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_setitem 2 + _NPY_DT_ARRFUNCS_OFFSET - -#define NPY_DT_PyArray_ArrFuncs_copyswapn 3 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_copyswap 4 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_compare 5 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_argmax 6 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_dotfunc 7 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_scanfunc 8 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_fromstr 9 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_nonzero 10 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_fill 11 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_fillwithscalar 12 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_sort 13 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_argsort 14 + _NPY_DT_ARRFUNCS_OFFSET - -// Casting related slots are disabled. See -// https://github.com/numpy/numpy/pull/23173#discussion_r1101098163 -// #define NPY_DT_PyArray_ArrFuncs_castdict 15 + _NPY_DT_ARRFUNCS_OFFSET -// #define NPY_DT_PyArray_ArrFuncs_scalarkind 16 + _NPY_DT_ARRFUNCS_OFFSET -// #define NPY_DT_PyArray_ArrFuncs_cancastscalarkindto 17 + _NPY_DT_ARRFUNCS_OFFSET -// #define NPY_DT_PyArray_ArrFuncs_cancastto 18 + _NPY_DT_ARRFUNCS_OFFSET - -// These are deprecated in NumPy 1.19, so are disabled here. -// #define NPY_DT_PyArray_ArrFuncs_fastclip 19 + _NPY_DT_ARRFUNCS_OFFSET -// #define NPY_DT_PyArray_ArrFuncs_fastputmask 20 + _NPY_DT_ARRFUNCS_OFFSET -// #define NPY_DT_PyArray_ArrFuncs_fasttake 21 + _NPY_DT_ARRFUNCS_OFFSET -#define NPY_DT_PyArray_ArrFuncs_argmin 22 + _NPY_DT_ARRFUNCS_OFFSET - -// TODO: These slots probably still need some thought, and/or a way to "grow"? -typedef struct { - PyTypeObject *typeobj; /* type of python scalar or NULL */ - int flags; /* flags, including parametric and abstract */ - /* NULL terminated cast definitions. Use NULL for the newly created DType */ - PyArrayMethod_Spec **casts; - PyType_Slot *slots; - /* Baseclass or NULL (will always subclass `np.dtype`) */ - PyTypeObject *baseclass; -} PyArrayDTypeMeta_Spec; - - -typedef PyArray_Descr *(discover_descr_from_pyobject_function)( - PyArray_DTypeMeta *cls, PyObject *obj); - -/* - * Before making this public, we should decide whether it should pass - * the type, or allow looking at the object. A possible use-case: - * `np.array(np.array([0]), dtype=np.ndarray)` - * Could consider arrays that are not `dtype=ndarray` "scalars". - */ -typedef int (is_known_scalar_type_function)( - PyArray_DTypeMeta *cls, PyTypeObject *obj); - -typedef PyArray_Descr *(default_descr_function)(PyArray_DTypeMeta *cls); -typedef PyArray_DTypeMeta *(common_dtype_function)( - PyArray_DTypeMeta *dtype1, PyArray_DTypeMeta *dtype2); -typedef PyArray_Descr *(common_instance_function)( - PyArray_Descr *dtype1, PyArray_Descr *dtype2); -typedef PyArray_Descr *(ensure_canonical_function)(PyArray_Descr *dtype); - -/* - * TODO: These two functions are currently only used for experimental DType - * API support. Their relation should be "reversed": NumPy should - * always use them internally. - * There are open points about "casting safety" though, e.g. setting - * elements is currently always unsafe. - */ -typedef int(setitemfunction)(PyArray_Descr *, PyObject *, char *); -typedef PyObject *(getitemfunction)(PyArray_Descr *, char *); - - -#endif /* NUMPY_CORE_INCLUDE_NUMPY___DTYPE_API_H_ */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/indexers/utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/indexers/utils.py deleted file mode 100644 index 55bb58f3108c3d7004058494284ea6fb4b2fca7f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/indexers/utils.py +++ /dev/null @@ -1,553 +0,0 @@ -""" -Low-dependency indexing utilities. -""" -from __future__ import annotations - -from typing import ( - TYPE_CHECKING, - Any, -) - -import numpy as np - -from pandas._libs import lib - -from pandas.core.dtypes.common import ( - is_array_like, - is_bool_dtype, - is_integer, - is_integer_dtype, - is_list_like, -) -from pandas.core.dtypes.dtypes import ExtensionDtype -from pandas.core.dtypes.generic import ( - ABCIndex, - ABCSeries, -) - -if TYPE_CHECKING: - from pandas._typing import AnyArrayLike - - from pandas.core.frame import DataFrame - from pandas.core.indexes.base import Index - -# ----------------------------------------------------------- -# Indexer Identification - - -def is_valid_positional_slice(slc: slice) -> bool: - """ - Check if a slice object can be interpreted as a positional indexer. - - Parameters - ---------- - slc : slice - - Returns - ------- - bool - - Notes - ----- - A valid positional slice may also be interpreted as a label-based slice - depending on the index being sliced. - """ - return ( - lib.is_int_or_none(slc.start) - and lib.is_int_or_none(slc.stop) - and lib.is_int_or_none(slc.step) - ) - - -def is_list_like_indexer(key) -> bool: - """ - Check if we have a list-like indexer that is *not* a NamedTuple. - - Parameters - ---------- - key : object - - Returns - ------- - bool - """ - # allow a list_like, but exclude NamedTuples which can be indexers - return is_list_like(key) and not (isinstance(key, tuple) and type(key) is not tuple) - - -def is_scalar_indexer(indexer, ndim: int) -> bool: - """ - Return True if we are all scalar indexers. - - Parameters - ---------- - indexer : object - ndim : int - Number of dimensions in the object being indexed. - - Returns - ------- - bool - """ - if ndim == 1 and is_integer(indexer): - # GH37748: allow indexer to be an integer for Series - return True - if isinstance(indexer, tuple) and len(indexer) == ndim: - return all(is_integer(x) for x in indexer) - return False - - -def is_empty_indexer(indexer) -> bool: - """ - Check if we have an empty indexer. - - Parameters - ---------- - indexer : object - - Returns - ------- - bool - """ - if is_list_like(indexer) and not len(indexer): - return True - if not isinstance(indexer, tuple): - indexer = (indexer,) - return any(isinstance(idx, np.ndarray) and len(idx) == 0 for idx in indexer) - - -# ----------------------------------------------------------- -# Indexer Validation - - -def check_setitem_lengths(indexer, value, values) -> bool: - """ - Validate that value and indexer are the same length. - - An special-case is allowed for when the indexer is a boolean array - and the number of true values equals the length of ``value``. In - this case, no exception is raised. - - Parameters - ---------- - indexer : sequence - Key for the setitem. - value : array-like - Value for the setitem. - values : array-like - Values being set into. - - Returns - ------- - bool - Whether this is an empty listlike setting which is a no-op. - - Raises - ------ - ValueError - When the indexer is an ndarray or list and the lengths don't match. - """ - no_op = False - - if isinstance(indexer, (np.ndarray, list)): - # We can ignore other listlikes because they are either - # a) not necessarily 1-D indexers, e.g. tuple - # b) boolean indexers e.g. BoolArray - if is_list_like(value): - if len(indexer) != len(value) and values.ndim == 1: - # boolean with truth values == len of the value is ok too - if isinstance(indexer, list): - indexer = np.array(indexer) - if not ( - isinstance(indexer, np.ndarray) - and indexer.dtype == np.bool_ - and indexer.sum() == len(value) - ): - raise ValueError( - "cannot set using a list-like indexer " - "with a different length than the value" - ) - if not len(indexer): - no_op = True - - elif isinstance(indexer, slice): - if is_list_like(value): - if len(value) != length_of_indexer(indexer, values) and values.ndim == 1: - # In case of two dimensional value is used row-wise and broadcasted - raise ValueError( - "cannot set using a slice indexer with a " - "different length than the value" - ) - if not len(value): - no_op = True - - return no_op - - -def validate_indices(indices: np.ndarray, n: int) -> None: - """ - Perform bounds-checking for an indexer. - - -1 is allowed for indicating missing values. - - Parameters - ---------- - indices : ndarray - n : int - Length of the array being indexed. - - Raises - ------ - ValueError - - Examples - -------- - >>> validate_indices(np.array([1, 2]), 3) # OK - - >>> validate_indices(np.array([1, -2]), 3) - Traceback (most recent call last): - ... - ValueError: negative dimensions are not allowed - - >>> validate_indices(np.array([1, 2, 3]), 3) - Traceback (most recent call last): - ... - IndexError: indices are out-of-bounds - - >>> validate_indices(np.array([-1, -1]), 0) # OK - - >>> validate_indices(np.array([0, 1]), 0) - Traceback (most recent call last): - ... - IndexError: indices are out-of-bounds - """ - if len(indices): - min_idx = indices.min() - if min_idx < -1: - msg = f"'indices' contains values less than allowed ({min_idx} < -1)" - raise ValueError(msg) - - max_idx = indices.max() - if max_idx >= n: - raise IndexError("indices are out-of-bounds") - - -# ----------------------------------------------------------- -# Indexer Conversion - - -def maybe_convert_indices(indices, n: int, verify: bool = True) -> np.ndarray: - """ - Attempt to convert indices into valid, positive indices. - - If we have negative indices, translate to positive here. - If we have indices that are out-of-bounds, raise an IndexError. - - Parameters - ---------- - indices : array-like - Array of indices that we are to convert. - n : int - Number of elements in the array that we are indexing. - verify : bool, default True - Check that all entries are between 0 and n - 1, inclusive. - - Returns - ------- - array-like - An array-like of positive indices that correspond to the ones - that were passed in initially to this function. - - Raises - ------ - IndexError - One of the converted indices either exceeded the number of, - elements (specified by `n`), or was still negative. - """ - if isinstance(indices, list): - indices = np.array(indices) - if len(indices) == 0: - # If `indices` is empty, np.array will return a float, - # and will cause indexing errors. - return np.empty(0, dtype=np.intp) - - mask = indices < 0 - if mask.any(): - indices = indices.copy() - indices[mask] += n - - if verify: - mask = (indices >= n) | (indices < 0) - if mask.any(): - raise IndexError("indices are out-of-bounds") - return indices - - -# ----------------------------------------------------------- -# Unsorted - - -def length_of_indexer(indexer, target=None) -> int: - """ - Return the expected length of target[indexer] - - Returns - ------- - int - """ - if target is not None and isinstance(indexer, slice): - target_len = len(target) - start = indexer.start - stop = indexer.stop - step = indexer.step - if start is None: - start = 0 - elif start < 0: - start += target_len - if stop is None or stop > target_len: - stop = target_len - elif stop < 0: - stop += target_len - if step is None: - step = 1 - elif step < 0: - start, stop = stop + 1, start + 1 - step = -step - return (stop - start + step - 1) // step - elif isinstance(indexer, (ABCSeries, ABCIndex, np.ndarray, list)): - if isinstance(indexer, list): - indexer = np.array(indexer) - - if indexer.dtype == bool: - # GH#25774 - return indexer.sum() - return len(indexer) - elif isinstance(indexer, range): - return (indexer.stop - indexer.start) // indexer.step - elif not is_list_like_indexer(indexer): - return 1 - raise AssertionError("cannot find the length of the indexer") - - -def disallow_ndim_indexing(result) -> None: - """ - Helper function to disallow multi-dimensional indexing on 1D Series/Index. - - GH#27125 indexer like idx[:, None] expands dim, but we cannot do that - and keep an index, so we used to return ndarray, which was deprecated - in GH#30588. - """ - if np.ndim(result) > 1: - raise ValueError( - "Multi-dimensional indexing (e.g. `obj[:, None]`) is no longer " - "supported. Convert to a numpy array before indexing instead." - ) - - -def unpack_1tuple(tup): - """ - If we have a length-1 tuple/list that contains a slice, unpack to just - the slice. - - Notes - ----- - The list case is deprecated. - """ - if len(tup) == 1 and isinstance(tup[0], slice): - # if we don't have a MultiIndex, we may still be able to handle - # a 1-tuple. see test_1tuple_without_multiindex - - if isinstance(tup, list): - # GH#31299 - raise ValueError( - "Indexing with a single-item list containing a " - "slice is not allowed. Pass a tuple instead.", - ) - - return tup[0] - return tup - - -def check_key_length(columns: Index, key, value: DataFrame) -> None: - """ - Checks if a key used as indexer has the same length as the columns it is - associated with. - - Parameters - ---------- - columns : Index The columns of the DataFrame to index. - key : A list-like of keys to index with. - value : DataFrame The value to set for the keys. - - Raises - ------ - ValueError: If the length of key is not equal to the number of columns in value - or if the number of columns referenced by key is not equal to number - of columns. - """ - if columns.is_unique: - if len(value.columns) != len(key): - raise ValueError("Columns must be same length as key") - else: - # Missing keys in columns are represented as -1 - if len(columns.get_indexer_non_unique(key)[0]) != len(value.columns): - raise ValueError("Columns must be same length as key") - - -def unpack_tuple_and_ellipses(item: tuple): - """ - Possibly unpack arr[..., n] to arr[n] - """ - if len(item) > 1: - # Note: we are assuming this indexing is being done on a 1D arraylike - if item[0] is Ellipsis: - item = item[1:] - elif item[-1] is Ellipsis: - item = item[:-1] - - if len(item) > 1: - raise IndexError("too many indices for array.") - - item = item[0] - return item - - -# ----------------------------------------------------------- -# Public indexer validation - - -def check_array_indexer(array: AnyArrayLike, indexer: Any) -> Any: - """ - Check if `indexer` is a valid array indexer for `array`. - - For a boolean mask, `array` and `indexer` are checked to have the same - length. The dtype is validated, and if it is an integer or boolean - ExtensionArray, it is checked if there are missing values present, and - it is converted to the appropriate numpy array. Other dtypes will raise - an error. - - Non-array indexers (integer, slice, Ellipsis, tuples, ..) are passed - through as is. - - Parameters - ---------- - array : array-like - The array that is being indexed (only used for the length). - indexer : array-like or list-like - The array-like that's used to index. List-like input that is not yet - a numpy array or an ExtensionArray is converted to one. Other input - types are passed through as is. - - Returns - ------- - numpy.ndarray - The validated indexer as a numpy array that can be used to index. - - Raises - ------ - IndexError - When the lengths don't match. - ValueError - When `indexer` cannot be converted to a numpy ndarray to index - (e.g. presence of missing values). - - See Also - -------- - api.types.is_bool_dtype : Check if `key` is of boolean dtype. - - Examples - -------- - When checking a boolean mask, a boolean ndarray is returned when the - arguments are all valid. - - >>> mask = pd.array([True, False]) - >>> arr = pd.array([1, 2]) - >>> pd.api.indexers.check_array_indexer(arr, mask) - array([ True, False]) - - An IndexError is raised when the lengths don't match. - - >>> mask = pd.array([True, False, True]) - >>> pd.api.indexers.check_array_indexer(arr, mask) - Traceback (most recent call last): - ... - IndexError: Boolean index has wrong length: 3 instead of 2. - - NA values in a boolean array are treated as False. - - >>> mask = pd.array([True, pd.NA]) - >>> pd.api.indexers.check_array_indexer(arr, mask) - array([ True, False]) - - A numpy boolean mask will get passed through (if the length is correct): - - >>> mask = np.array([True, False]) - >>> pd.api.indexers.check_array_indexer(arr, mask) - array([ True, False]) - - Similarly for integer indexers, an integer ndarray is returned when it is - a valid indexer, otherwise an error is (for integer indexers, a matching - length is not required): - - >>> indexer = pd.array([0, 2], dtype="Int64") - >>> arr = pd.array([1, 2, 3]) - >>> pd.api.indexers.check_array_indexer(arr, indexer) - array([0, 2]) - - >>> indexer = pd.array([0, pd.NA], dtype="Int64") - >>> pd.api.indexers.check_array_indexer(arr, indexer) - Traceback (most recent call last): - ... - ValueError: Cannot index with an integer indexer containing NA values - - For non-integer/boolean dtypes, an appropriate error is raised: - - >>> indexer = np.array([0., 2.], dtype="float64") - >>> pd.api.indexers.check_array_indexer(arr, indexer) - Traceback (most recent call last): - ... - IndexError: arrays used as indices must be of integer or boolean type - """ - from pandas.core.construction import array as pd_array - - # whatever is not an array-like is returned as-is (possible valid array - # indexers that are not array-like: integer, slice, Ellipsis, None) - # In this context, tuples are not considered as array-like, as they have - # a specific meaning in indexing (multi-dimensional indexing) - if is_list_like(indexer): - if isinstance(indexer, tuple): - return indexer - else: - return indexer - - # convert list-likes to array - if not is_array_like(indexer): - indexer = pd_array(indexer) - if len(indexer) == 0: - # empty list is converted to float array by pd.array - indexer = np.array([], dtype=np.intp) - - dtype = indexer.dtype - if is_bool_dtype(dtype): - if isinstance(dtype, ExtensionDtype): - indexer = indexer.to_numpy(dtype=bool, na_value=False) - else: - indexer = np.asarray(indexer, dtype=bool) - - # GH26658 - if len(indexer) != len(array): - raise IndexError( - f"Boolean index has wrong length: " - f"{len(indexer)} instead of {len(array)}" - ) - elif is_integer_dtype(dtype): - try: - indexer = np.asarray(indexer, dtype=np.intp) - except ValueError as err: - raise ValueError( - "Cannot index with an integer indexer containing NA values" - ) from err - else: - raise IndexError("arrays used as indices must be of integer or boolean type") - - return indexer diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/base_class/test_reshape.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/base_class/test_reshape.py deleted file mode 100644 index 6586f5f9de4801a567e0e4311df6c14a438ec208..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/base_class/test_reshape.py +++ /dev/null @@ -1,93 +0,0 @@ -""" -Tests for ndarray-like method on the base Index class -""" -import numpy as np -import pytest - -from pandas import Index -import pandas._testing as tm - - -class TestReshape: - def test_repeat(self): - repeats = 2 - index = Index([1, 2, 3]) - expected = Index([1, 1, 2, 2, 3, 3]) - - result = index.repeat(repeats) - tm.assert_index_equal(result, expected) - - def test_insert(self): - # GH 7256 - # validate neg/pos inserts - result = Index(["b", "c", "d"]) - - # test 0th element - tm.assert_index_equal(Index(["a", "b", "c", "d"]), result.insert(0, "a")) - - # test Nth element that follows Python list behavior - tm.assert_index_equal(Index(["b", "c", "e", "d"]), result.insert(-1, "e")) - - # test loc +/- neq (0, -1) - tm.assert_index_equal(result.insert(1, "z"), result.insert(-2, "z")) - - # test empty - null_index = Index([]) - tm.assert_index_equal(Index(["a"]), null_index.insert(0, "a")) - - def test_insert_missing(self, nulls_fixture): - # GH#22295 - # test there is no mangling of NA values - expected = Index(["a", nulls_fixture, "b", "c"]) - result = Index(list("abc")).insert(1, nulls_fixture) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize( - "val", [(1, 2), np.datetime64("2019-12-31"), np.timedelta64(1, "D")] - ) - @pytest.mark.parametrize("loc", [-1, 2]) - def test_insert_datetime_into_object(self, loc, val): - # GH#44509 - idx = Index(["1", "2", "3"]) - result = idx.insert(loc, val) - expected = Index(["1", "2", val, "3"]) - tm.assert_index_equal(result, expected) - assert type(expected[2]) is type(val) - - def test_insert_none_into_string_numpy(self): - # GH#55365 - pytest.importorskip("pyarrow") - index = Index(["a", "b", "c"], dtype="string[pyarrow_numpy]") - result = index.insert(-1, None) - expected = Index(["a", "b", None, "c"], dtype="string[pyarrow_numpy]") - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize( - "pos,expected", - [ - (0, Index(["b", "c", "d"], name="index")), - (-1, Index(["a", "b", "c"], name="index")), - ], - ) - def test_delete(self, pos, expected): - index = Index(["a", "b", "c", "d"], name="index") - result = index.delete(pos) - tm.assert_index_equal(result, expected) - assert result.name == expected.name - - def test_delete_raises(self): - index = Index(["a", "b", "c", "d"], name="index") - msg = "index 5 is out of bounds for axis 0 with size 4" - with pytest.raises(IndexError, match=msg): - index.delete(5) - - def test_append_multiple(self): - index = Index(["a", "b", "c", "d", "e", "f"]) - - foos = [index[:2], index[2:4], index[4:]] - result = foos[0].append(foos[1:]) - tm.assert_index_equal(result, index) - - # empty - result = index.append([]) - tm.assert_index_equal(result, index) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/test_constructors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/test_constructors.py deleted file mode 100644 index fc7ce19c42b3819e4833873836fe14407168bc80..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/test_constructors.py +++ /dev/null @@ -1,1118 +0,0 @@ -from __future__ import annotations - -from datetime import ( - datetime, - timedelta, - timezone, -) -from functools import partial -from operator import attrgetter - -import dateutil -import numpy as np -import pytest -import pytz - -from pandas._libs.tslibs import ( - OutOfBoundsDatetime, - astype_overflowsafe, -) - -import pandas as pd -from pandas import ( - DatetimeIndex, - Index, - Timestamp, - date_range, - offsets, - to_datetime, -) -import pandas._testing as tm -from pandas.core.arrays import ( - DatetimeArray, - period_array, -) - - -class TestDatetimeIndex: - def test_closed_deprecated(self): - # GH#52628 - msg = "The 'closed' keyword" - with tm.assert_produces_warning(FutureWarning, match=msg): - DatetimeIndex([], closed=True) - - def test_normalize_deprecated(self): - # GH#52628 - msg = "The 'normalize' keyword" - with tm.assert_produces_warning(FutureWarning, match=msg): - DatetimeIndex([], normalize=True) - - def test_from_dt64_unsupported_unit(self): - # GH#49292 - val = np.datetime64(1, "D") - result = DatetimeIndex([val], tz="US/Pacific") - - expected = DatetimeIndex([val.astype("M8[s]")], tz="US/Pacific") - tm.assert_index_equal(result, expected) - - def test_explicit_tz_none(self): - # GH#48659 - dti = date_range("2016-01-01", periods=10, tz="UTC") - - msg = "Passed data is timezone-aware, incompatible with 'tz=None'" - with pytest.raises(ValueError, match=msg): - DatetimeIndex(dti, tz=None) - - with pytest.raises(ValueError, match=msg): - DatetimeIndex(np.array(dti), tz=None) - - msg = "Cannot pass both a timezone-aware dtype and tz=None" - with pytest.raises(ValueError, match=msg): - DatetimeIndex([], dtype="M8[ns, UTC]", tz=None) - - @pytest.mark.parametrize( - "dt_cls", [DatetimeIndex, DatetimeArray._from_sequence_not_strict] - ) - def test_freq_validation_with_nat(self, dt_cls): - # GH#11587 make sure we get a useful error message when generate_range - # raises - msg = ( - "Inferred frequency None from passed values does not conform " - "to passed frequency D" - ) - with pytest.raises(ValueError, match=msg): - dt_cls([pd.NaT, Timestamp("2011-01-01")], freq="D") - with pytest.raises(ValueError, match=msg): - dt_cls([pd.NaT, Timestamp("2011-01-01")._value], freq="D") - - # TODO: better place for tests shared by DTI/TDI? - @pytest.mark.parametrize( - "index", - [ - date_range("2016-01-01", periods=5, tz="US/Pacific"), - pd.timedelta_range("1 Day", periods=5), - ], - ) - def test_shallow_copy_inherits_array_freq(self, index): - # If we pass a DTA/TDA to shallow_copy and dont specify a freq, - # we should inherit the array's freq, not our own. - array = index._data - - arr = array[[0, 3, 2, 4, 1]] - assert arr.freq is None - - result = index._shallow_copy(arr) - assert result.freq is None - - def test_categorical_preserves_tz(self): - # GH#18664 retain tz when going DTI-->Categorical-->DTI - dti = DatetimeIndex( - [pd.NaT, "2015-01-01", "1999-04-06 15:14:13", "2015-01-01"], tz="US/Eastern" - ) - - for dtobj in [dti, dti._data]: - # works for DatetimeIndex or DatetimeArray - - ci = pd.CategoricalIndex(dtobj) - carr = pd.Categorical(dtobj) - cser = pd.Series(ci) - - for obj in [ci, carr, cser]: - result = DatetimeIndex(obj) - tm.assert_index_equal(result, dti) - - def test_dti_with_period_data_raises(self): - # GH#23675 - data = pd.PeriodIndex(["2016Q1", "2016Q2"], freq="Q") - - with pytest.raises(TypeError, match="PeriodDtype data is invalid"): - DatetimeIndex(data) - - with pytest.raises(TypeError, match="PeriodDtype data is invalid"): - to_datetime(data) - - with pytest.raises(TypeError, match="PeriodDtype data is invalid"): - DatetimeIndex(period_array(data)) - - with pytest.raises(TypeError, match="PeriodDtype data is invalid"): - to_datetime(period_array(data)) - - def test_dti_with_timedelta64_data_raises(self): - # GH#23675 deprecated, enforrced in GH#29794 - data = np.array([0], dtype="m8[ns]") - msg = r"timedelta64\[ns\] cannot be converted to datetime64" - with pytest.raises(TypeError, match=msg): - DatetimeIndex(data) - - with pytest.raises(TypeError, match=msg): - to_datetime(data) - - with pytest.raises(TypeError, match=msg): - DatetimeIndex(pd.TimedeltaIndex(data)) - - with pytest.raises(TypeError, match=msg): - to_datetime(pd.TimedeltaIndex(data)) - - def test_constructor_from_sparse_array(self): - # https://github.com/pandas-dev/pandas/issues/35843 - values = [ - Timestamp("2012-05-01T01:00:00.000000"), - Timestamp("2016-05-01T01:00:00.000000"), - ] - arr = pd.arrays.SparseArray(values) - result = Index(arr) - assert type(result) is Index - assert result.dtype == arr.dtype - - def test_construction_caching(self): - df = pd.DataFrame( - { - "dt": date_range("20130101", periods=3), - "dttz": date_range("20130101", periods=3, tz="US/Eastern"), - "dt_with_null": [ - Timestamp("20130101"), - pd.NaT, - Timestamp("20130103"), - ], - "dtns": date_range("20130101", periods=3, freq="ns"), - } - ) - assert df.dttz.dtype.tz.zone == "US/Eastern" - - @pytest.mark.parametrize( - "kwargs", - [{"tz": "dtype.tz"}, {"dtype": "dtype"}, {"dtype": "dtype", "tz": "dtype.tz"}], - ) - def test_construction_with_alt(self, kwargs, tz_aware_fixture): - tz = tz_aware_fixture - i = date_range("20130101", periods=5, freq="H", tz=tz) - kwargs = {key: attrgetter(val)(i) for key, val in kwargs.items()} - result = DatetimeIndex(i, **kwargs) - tm.assert_index_equal(i, result) - - @pytest.mark.parametrize( - "kwargs", - [{"tz": "dtype.tz"}, {"dtype": "dtype"}, {"dtype": "dtype", "tz": "dtype.tz"}], - ) - def test_construction_with_alt_tz_localize(self, kwargs, tz_aware_fixture): - tz = tz_aware_fixture - i = date_range("20130101", periods=5, freq="H", tz=tz) - i = i._with_freq(None) - kwargs = {key: attrgetter(val)(i) for key, val in kwargs.items()} - - if "tz" in kwargs: - result = DatetimeIndex(i.asi8, tz="UTC").tz_convert(kwargs["tz"]) - - expected = DatetimeIndex(i, **kwargs) - tm.assert_index_equal(result, expected) - - # localize into the provided tz - i2 = DatetimeIndex(i.tz_localize(None).asi8, tz="UTC") - expected = i.tz_localize(None).tz_localize("UTC") - tm.assert_index_equal(i2, expected) - - # incompat tz/dtype - msg = "cannot supply both a tz and a dtype with a tz" - with pytest.raises(ValueError, match=msg): - DatetimeIndex(i.tz_localize(None).asi8, dtype=i.dtype, tz="US/Pacific") - - def test_construction_index_with_mixed_timezones(self): - # gh-11488: no tz results in DatetimeIndex - result = Index([Timestamp("2011-01-01"), Timestamp("2011-01-02")], name="idx") - exp = DatetimeIndex( - [Timestamp("2011-01-01"), Timestamp("2011-01-02")], name="idx" - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is None - - # same tz results in DatetimeIndex - result = Index( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="Asia/Tokyo"), - ], - name="idx", - ) - exp = DatetimeIndex( - [Timestamp("2011-01-01 10:00"), Timestamp("2011-01-02 10:00")], - tz="Asia/Tokyo", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is not None - assert result.tz == exp.tz - - # same tz results in DatetimeIndex (DST) - result = Index( - [ - Timestamp("2011-01-01 10:00", tz="US/Eastern"), - Timestamp("2011-08-01 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = DatetimeIndex( - [Timestamp("2011-01-01 10:00"), Timestamp("2011-08-01 10:00")], - tz="US/Eastern", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is not None - assert result.tz == exp.tz - - # Different tz results in Index(dtype=object) - result = Index( - [ - Timestamp("2011-01-01 10:00"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = Index( - [ - Timestamp("2011-01-01 10:00"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - dtype="object", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert not isinstance(result, DatetimeIndex) - - result = Index( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = Index( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - dtype="object", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert not isinstance(result, DatetimeIndex) - - msg = "DatetimeIndex has mixed timezones" - msg_depr = "parsing datetimes with mixed time zones will raise an error" - with pytest.raises(TypeError, match=msg): - with tm.assert_produces_warning(FutureWarning, match=msg_depr): - DatetimeIndex(["2013-11-02 22:00-05:00", "2013-11-03 22:00-06:00"]) - - # length = 1 - result = Index([Timestamp("2011-01-01")], name="idx") - exp = DatetimeIndex([Timestamp("2011-01-01")], name="idx") - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is None - - # length = 1 with tz - result = Index([Timestamp("2011-01-01 10:00", tz="Asia/Tokyo")], name="idx") - exp = DatetimeIndex( - [Timestamp("2011-01-01 10:00")], tz="Asia/Tokyo", name="idx" - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is not None - assert result.tz == exp.tz - - def test_construction_index_with_mixed_timezones_with_NaT(self): - # see gh-11488 - result = Index( - [pd.NaT, Timestamp("2011-01-01"), pd.NaT, Timestamp("2011-01-02")], - name="idx", - ) - exp = DatetimeIndex( - [pd.NaT, Timestamp("2011-01-01"), pd.NaT, Timestamp("2011-01-02")], - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is None - - # Same tz results in DatetimeIndex - result = Index( - [ - pd.NaT, - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - pd.NaT, - Timestamp("2011-01-02 10:00", tz="Asia/Tokyo"), - ], - name="idx", - ) - exp = DatetimeIndex( - [ - pd.NaT, - Timestamp("2011-01-01 10:00"), - pd.NaT, - Timestamp("2011-01-02 10:00"), - ], - tz="Asia/Tokyo", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is not None - assert result.tz == exp.tz - - # same tz results in DatetimeIndex (DST) - result = Index( - [ - Timestamp("2011-01-01 10:00", tz="US/Eastern"), - pd.NaT, - Timestamp("2011-08-01 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = DatetimeIndex( - [Timestamp("2011-01-01 10:00"), pd.NaT, Timestamp("2011-08-01 10:00")], - tz="US/Eastern", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is not None - assert result.tz == exp.tz - - # different tz results in Index(dtype=object) - result = Index( - [ - pd.NaT, - Timestamp("2011-01-01 10:00"), - pd.NaT, - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = Index( - [ - pd.NaT, - Timestamp("2011-01-01 10:00"), - pd.NaT, - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - dtype="object", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert not isinstance(result, DatetimeIndex) - - result = Index( - [ - pd.NaT, - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - pd.NaT, - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = Index( - [ - pd.NaT, - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - pd.NaT, - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - dtype="object", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert not isinstance(result, DatetimeIndex) - - # all NaT - result = Index([pd.NaT, pd.NaT], name="idx") - exp = DatetimeIndex([pd.NaT, pd.NaT], name="idx") - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - assert result.tz is None - - def test_construction_dti_with_mixed_timezones(self): - # GH 11488 (not changed, added explicit tests) - - # no tz results in DatetimeIndex - result = DatetimeIndex( - [Timestamp("2011-01-01"), Timestamp("2011-01-02")], name="idx" - ) - exp = DatetimeIndex( - [Timestamp("2011-01-01"), Timestamp("2011-01-02")], name="idx" - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - - # same tz results in DatetimeIndex - result = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="Asia/Tokyo"), - ], - name="idx", - ) - exp = DatetimeIndex( - [Timestamp("2011-01-01 10:00"), Timestamp("2011-01-02 10:00")], - tz="Asia/Tokyo", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - - # same tz results in DatetimeIndex (DST) - result = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="US/Eastern"), - Timestamp("2011-08-01 10:00", tz="US/Eastern"), - ], - name="idx", - ) - exp = DatetimeIndex( - [Timestamp("2011-01-01 10:00"), Timestamp("2011-08-01 10:00")], - tz="US/Eastern", - name="idx", - ) - tm.assert_index_equal(result, exp, exact=True) - assert isinstance(result, DatetimeIndex) - - # tz mismatch affecting to tz-aware raises TypeError/ValueError - - msg = "cannot be converted to datetime64" - with pytest.raises(ValueError, match=msg): - DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - name="idx", - ) - - # pre-2.0 this raised bc of awareness mismatch. in 2.0 with a tz# - # specified we behave as if this was called pointwise, so - # the naive Timestamp is treated as a wall time. - dti = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - tz="Asia/Tokyo", - name="idx", - ) - expected = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="US/Eastern").tz_convert("Asia/Tokyo"), - ], - tz="Asia/Tokyo", - name="idx", - ) - tm.assert_index_equal(dti, expected) - - # pre-2.0 mixed-tz scalars raised even if a tz/dtype was specified. - # as of 2.0 we successfully return the requested tz/dtype - dti = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - tz="US/Eastern", - name="idx", - ) - expected = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo").tz_convert("US/Eastern"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - tz="US/Eastern", - name="idx", - ) - tm.assert_index_equal(dti, expected) - - # same thing but pass dtype instead of tz - dti = DatetimeIndex( - [ - Timestamp("2011-01-01 10:00", tz="Asia/Tokyo"), - Timestamp("2011-01-02 10:00", tz="US/Eastern"), - ], - dtype="M8[ns, US/Eastern]", - name="idx", - ) - tm.assert_index_equal(dti, expected) - - def test_construction_base_constructor(self): - arr = [Timestamp("2011-01-01"), pd.NaT, Timestamp("2011-01-03")] - tm.assert_index_equal(Index(arr), DatetimeIndex(arr)) - tm.assert_index_equal(Index(np.array(arr)), DatetimeIndex(np.array(arr))) - - arr = [np.nan, pd.NaT, Timestamp("2011-01-03")] - tm.assert_index_equal(Index(arr), DatetimeIndex(arr)) - tm.assert_index_equal(Index(np.array(arr)), DatetimeIndex(np.array(arr))) - - def test_construction_outofbounds(self): - # GH 13663 - dates = [ - datetime(3000, 1, 1), - datetime(4000, 1, 1), - datetime(5000, 1, 1), - datetime(6000, 1, 1), - ] - exp = Index(dates, dtype=object) - # coerces to object - tm.assert_index_equal(Index(dates), exp) - - msg = "^Out of bounds nanosecond timestamp: 3000-01-01 00:00:00, at position 0$" - with pytest.raises(OutOfBoundsDatetime, match=msg): - # can't create DatetimeIndex - DatetimeIndex(dates) - - def test_construction_with_ndarray(self): - # GH 5152 - dates = [datetime(2013, 10, 7), datetime(2013, 10, 8), datetime(2013, 10, 9)] - data = DatetimeIndex(dates, freq=offsets.BDay()).values - result = DatetimeIndex(data, freq=offsets.BDay()) - expected = DatetimeIndex(["2013-10-07", "2013-10-08", "2013-10-09"], freq="B") - tm.assert_index_equal(result, expected) - - def test_integer_values_and_tz_interpreted_as_utc(self): - # GH-24559 - val = np.datetime64("2000-01-01 00:00:00", "ns") - values = np.array([val.view("i8")]) - - result = DatetimeIndex(values).tz_localize("US/Central") - - expected = DatetimeIndex(["2000-01-01T00:00:00"], tz="US/Central") - tm.assert_index_equal(result, expected) - - # but UTC is *not* deprecated. - with tm.assert_produces_warning(None): - result = DatetimeIndex(values, tz="UTC") - expected = DatetimeIndex(["2000-01-01T00:00:00"], tz="US/Central") - - def test_constructor_coverage(self): - rng = date_range("1/1/2000", periods=10.5) - exp = date_range("1/1/2000", periods=10) - tm.assert_index_equal(rng, exp) - - msg = "periods must be a number, got foo" - with pytest.raises(TypeError, match=msg): - date_range(start="1/1/2000", periods="foo", freq="D") - - msg = r"DatetimeIndex\(\.\.\.\) must be called with a collection" - with pytest.raises(TypeError, match=msg): - DatetimeIndex("1/1/2000") - - # generator expression - gen = (datetime(2000, 1, 1) + timedelta(i) for i in range(10)) - result = DatetimeIndex(gen) - expected = DatetimeIndex( - [datetime(2000, 1, 1) + timedelta(i) for i in range(10)] - ) - tm.assert_index_equal(result, expected) - - # NumPy string array - strings = np.array(["2000-01-01", "2000-01-02", "2000-01-03"]) - result = DatetimeIndex(strings) - expected = DatetimeIndex(strings.astype("O")) - tm.assert_index_equal(result, expected) - - from_ints = DatetimeIndex(expected.asi8) - tm.assert_index_equal(from_ints, expected) - - # string with NaT - strings = np.array(["2000-01-01", "2000-01-02", "NaT"]) - result = DatetimeIndex(strings) - expected = DatetimeIndex(strings.astype("O")) - tm.assert_index_equal(result, expected) - - from_ints = DatetimeIndex(expected.asi8) - tm.assert_index_equal(from_ints, expected) - - # non-conforming - msg = ( - "Inferred frequency None from passed values does not conform " - "to passed frequency D" - ) - with pytest.raises(ValueError, match=msg): - DatetimeIndex(["2000-01-01", "2000-01-02", "2000-01-04"], freq="D") - - msg = ( - "Of the four parameters: start, end, periods, and freq, exactly " - "three must be specified" - ) - with pytest.raises(ValueError, match=msg): - date_range(start="2011-01-01", freq="b") - with pytest.raises(ValueError, match=msg): - date_range(end="2011-01-01", freq="B") - with pytest.raises(ValueError, match=msg): - date_range(periods=10, freq="D") - - @pytest.mark.parametrize("freq", ["AS", "W-SUN"]) - def test_constructor_datetime64_tzformat(self, freq): - # see GH#6572: ISO 8601 format results in stdlib timezone object - idx = date_range( - "2013-01-01T00:00:00-05:00", "2016-01-01T23:59:59-05:00", freq=freq - ) - expected = date_range( - "2013-01-01T00:00:00", - "2016-01-01T23:59:59", - freq=freq, - tz=timezone(timedelta(minutes=-300)), - ) - tm.assert_index_equal(idx, expected) - # Unable to use `US/Eastern` because of DST - expected_i8 = date_range( - "2013-01-01T00:00:00", "2016-01-01T23:59:59", freq=freq, tz="America/Lima" - ) - tm.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) - - idx = date_range( - "2013-01-01T00:00:00+09:00", "2016-01-01T23:59:59+09:00", freq=freq - ) - expected = date_range( - "2013-01-01T00:00:00", - "2016-01-01T23:59:59", - freq=freq, - tz=timezone(timedelta(minutes=540)), - ) - tm.assert_index_equal(idx, expected) - expected_i8 = date_range( - "2013-01-01T00:00:00", "2016-01-01T23:59:59", freq=freq, tz="Asia/Tokyo" - ) - tm.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) - - # Non ISO 8601 format results in dateutil.tz.tzoffset - idx = date_range("2013/1/1 0:00:00-5:00", "2016/1/1 23:59:59-5:00", freq=freq) - expected = date_range( - "2013-01-01T00:00:00", - "2016-01-01T23:59:59", - freq=freq, - tz=timezone(timedelta(minutes=-300)), - ) - tm.assert_index_equal(idx, expected) - # Unable to use `US/Eastern` because of DST - expected_i8 = date_range( - "2013-01-01T00:00:00", "2016-01-01T23:59:59", freq=freq, tz="America/Lima" - ) - tm.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) - - idx = date_range("2013/1/1 0:00:00+9:00", "2016/1/1 23:59:59+09:00", freq=freq) - expected = date_range( - "2013-01-01T00:00:00", - "2016-01-01T23:59:59", - freq=freq, - tz=timezone(timedelta(minutes=540)), - ) - tm.assert_index_equal(idx, expected) - expected_i8 = date_range( - "2013-01-01T00:00:00", "2016-01-01T23:59:59", freq=freq, tz="Asia/Tokyo" - ) - tm.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) - - def test_constructor_dtype(self): - # passing a dtype with a tz should localize - idx = DatetimeIndex( - ["2013-01-01", "2013-01-02"], dtype="datetime64[ns, US/Eastern]" - ) - expected = DatetimeIndex(["2013-01-01", "2013-01-02"]).tz_localize("US/Eastern") - tm.assert_index_equal(idx, expected) - - idx = DatetimeIndex(["2013-01-01", "2013-01-02"], tz="US/Eastern") - tm.assert_index_equal(idx, expected) - - def test_constructor_dtype_tz_mismatch_raises(self): - # if we already have a tz and its not the same, then raise - idx = DatetimeIndex( - ["2013-01-01", "2013-01-02"], dtype="datetime64[ns, US/Eastern]" - ) - - msg = ( - "cannot supply both a tz and a timezone-naive dtype " - r"\(i\.e\. datetime64\[ns\]\)" - ) - with pytest.raises(ValueError, match=msg): - DatetimeIndex(idx, dtype="datetime64[ns]") - - # this is effectively trying to convert tz's - msg = "data is already tz-aware US/Eastern, unable to set specified tz: CET" - with pytest.raises(TypeError, match=msg): - DatetimeIndex(idx, dtype="datetime64[ns, CET]") - msg = "cannot supply both a tz and a dtype with a tz" - with pytest.raises(ValueError, match=msg): - DatetimeIndex(idx, tz="CET", dtype="datetime64[ns, US/Eastern]") - - result = DatetimeIndex(idx, dtype="datetime64[ns, US/Eastern]") - tm.assert_index_equal(idx, result) - - @pytest.mark.parametrize("dtype", [object, np.int32, np.int64]) - def test_constructor_invalid_dtype_raises(self, dtype): - # GH 23986 - msg = "Unexpected value for 'dtype'" - with pytest.raises(ValueError, match=msg): - DatetimeIndex([1, 2], dtype=dtype) - - def test_constructor_name(self): - idx = date_range(start="2000-01-01", periods=1, freq="A", name="TEST") - assert idx.name == "TEST" - - def test_000constructor_resolution(self): - # 2252 - t1 = Timestamp((1352934390 * 1000000000) + 1000000 + 1000 + 1) - idx = DatetimeIndex([t1]) - - assert idx.nanosecond[0] == t1.nanosecond - - def test_disallow_setting_tz(self): - # GH 3746 - dti = DatetimeIndex(["2010"], tz="UTC") - msg = "Cannot directly set timezone" - with pytest.raises(AttributeError, match=msg): - dti.tz = pytz.timezone("US/Pacific") - - @pytest.mark.parametrize( - "tz", - [ - None, - "America/Los_Angeles", - pytz.timezone("America/Los_Angeles"), - Timestamp("2000", tz="America/Los_Angeles").tz, - ], - ) - def test_constructor_start_end_with_tz(self, tz): - # GH 18595 - start = Timestamp("2013-01-01 06:00:00", tz="America/Los_Angeles") - end = Timestamp("2013-01-02 06:00:00", tz="America/Los_Angeles") - result = date_range(freq="D", start=start, end=end, tz=tz) - expected = DatetimeIndex( - ["2013-01-01 06:00:00", "2013-01-02 06:00:00"], - tz="America/Los_Angeles", - freq="D", - ) - tm.assert_index_equal(result, expected) - # Especially assert that the timezone is consistent for pytz - assert pytz.timezone("America/Los_Angeles") is result.tz - - @pytest.mark.parametrize("tz", ["US/Pacific", "US/Eastern", "Asia/Tokyo"]) - def test_constructor_with_non_normalized_pytz(self, tz): - # GH 18595 - non_norm_tz = Timestamp("2010", tz=tz).tz - result = DatetimeIndex(["2010"], tz=non_norm_tz) - assert pytz.timezone(tz) is result.tz - - def test_constructor_timestamp_near_dst(self): - # GH 20854 - ts = [ - Timestamp("2016-10-30 03:00:00+0300", tz="Europe/Helsinki"), - Timestamp("2016-10-30 03:00:00+0200", tz="Europe/Helsinki"), - ] - result = DatetimeIndex(ts) - expected = DatetimeIndex([ts[0].to_pydatetime(), ts[1].to_pydatetime()]) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("klass", [Index, DatetimeIndex]) - @pytest.mark.parametrize("box", [np.array, partial(np.array, dtype=object), list]) - @pytest.mark.parametrize( - "tz, dtype", - [("US/Pacific", "datetime64[ns, US/Pacific]"), (None, "datetime64[ns]")], - ) - def test_constructor_with_int_tz(self, klass, box, tz, dtype): - # GH 20997, 20964 - ts = Timestamp("2018-01-01", tz=tz).as_unit("ns") - result = klass(box([ts._value]), dtype=dtype) - expected = klass([ts]) - assert result == expected - - def test_construction_int_rountrip(self, tz_naive_fixture): - # GH 12619, GH#24559 - tz = tz_naive_fixture - - result = 1293858000000000000 - expected = DatetimeIndex([result], tz=tz).asi8[0] - assert result == expected - - def test_construction_from_replaced_timestamps_with_dst(self): - # GH 18785 - index = date_range( - Timestamp(2000, 1, 1), - Timestamp(2005, 1, 1), - freq="MS", - tz="Australia/Melbourne", - ) - test = pd.DataFrame({"data": range(len(index))}, index=index) - test = test.resample("Y").mean() - result = DatetimeIndex([x.replace(month=6, day=1) for x in test.index]) - expected = DatetimeIndex( - [ - "2000-06-01 00:00:00", - "2001-06-01 00:00:00", - "2002-06-01 00:00:00", - "2003-06-01 00:00:00", - "2004-06-01 00:00:00", - "2005-06-01 00:00:00", - ], - tz="Australia/Melbourne", - ) - tm.assert_index_equal(result, expected) - - def test_construction_with_tz_and_tz_aware_dti(self): - # GH 23579 - dti = date_range("2016-01-01", periods=3, tz="US/Central") - msg = "data is already tz-aware US/Central, unable to set specified tz" - with pytest.raises(TypeError, match=msg): - DatetimeIndex(dti, tz="Asia/Tokyo") - - def test_construction_with_nat_and_tzlocal(self): - tz = dateutil.tz.tzlocal() - result = DatetimeIndex(["2018", "NaT"], tz=tz) - expected = DatetimeIndex([Timestamp("2018", tz=tz), pd.NaT]) - tm.assert_index_equal(result, expected) - - def test_constructor_with_ambiguous_keyword_arg(self): - # GH 35297 - - expected = DatetimeIndex( - ["2020-11-01 01:00:00", "2020-11-02 01:00:00"], - dtype="datetime64[ns, America/New_York]", - freq="D", - ambiguous=False, - ) - - # ambiguous keyword in start - timezone = "America/New_York" - start = Timestamp(year=2020, month=11, day=1, hour=1).tz_localize( - timezone, ambiguous=False - ) - result = date_range(start=start, periods=2, ambiguous=False) - tm.assert_index_equal(result, expected) - - # ambiguous keyword in end - timezone = "America/New_York" - end = Timestamp(year=2020, month=11, day=2, hour=1).tz_localize( - timezone, ambiguous=False - ) - result = date_range(end=end, periods=2, ambiguous=False) - tm.assert_index_equal(result, expected) - - def test_constructor_with_nonexistent_keyword_arg(self, warsaw): - # GH 35297 - timezone = warsaw - - # nonexistent keyword in start - start = Timestamp("2015-03-29 02:30:00").tz_localize( - timezone, nonexistent="shift_forward" - ) - result = date_range(start=start, periods=2, freq="H") - expected = DatetimeIndex( - [ - Timestamp("2015-03-29 03:00:00+02:00", tz=timezone), - Timestamp("2015-03-29 04:00:00+02:00", tz=timezone), - ] - ) - - tm.assert_index_equal(result, expected) - - # nonexistent keyword in end - end = Timestamp("2015-03-29 02:30:00").tz_localize( - timezone, nonexistent="shift_forward" - ) - result = date_range(end=end, periods=2, freq="H") - expected = DatetimeIndex( - [ - Timestamp("2015-03-29 01:00:00+01:00", tz=timezone), - Timestamp("2015-03-29 03:00:00+02:00", tz=timezone), - ] - ) - - tm.assert_index_equal(result, expected) - - def test_constructor_no_precision_raises(self): - # GH-24753, GH-24739 - - msg = "with no precision is not allowed" - with pytest.raises(ValueError, match=msg): - DatetimeIndex(["2000"], dtype="datetime64") - - msg = "The 'datetime64' dtype has no unit. Please pass in" - with pytest.raises(ValueError, match=msg): - Index(["2000"], dtype="datetime64") - - def test_constructor_wrong_precision_raises(self): - dti = DatetimeIndex(["2000"], dtype="datetime64[us]") - assert dti.dtype == "M8[us]" - assert dti[0] == Timestamp(2000, 1, 1) - - def test_index_constructor_with_numpy_object_array_and_timestamp_tz_with_nan(self): - # GH 27011 - result = Index(np.array([Timestamp("2019", tz="UTC"), np.nan], dtype=object)) - expected = DatetimeIndex([Timestamp("2019", tz="UTC"), pd.NaT]) - tm.assert_index_equal(result, expected) - - -class TestTimeSeries: - def test_dti_constructor_preserve_dti_freq(self): - rng = date_range("1/1/2000", "1/2/2000", freq="5min") - - rng2 = DatetimeIndex(rng) - assert rng.freq == rng2.freq - - def test_explicit_none_freq(self): - # Explicitly passing freq=None is respected - rng = date_range("1/1/2000", "1/2/2000", freq="5min") - - result = DatetimeIndex(rng, freq=None) - assert result.freq is None - - result = DatetimeIndex(rng._data, freq=None) - assert result.freq is None - - dta = DatetimeArray(rng, freq=None) - assert dta.freq is None - - def test_dti_constructor_years_only(self, tz_naive_fixture): - tz = tz_naive_fixture - # GH 6961 - rng1 = date_range("2014", "2015", freq="M", tz=tz) - expected1 = date_range("2014-01-31", "2014-12-31", freq="M", tz=tz) - - rng2 = date_range("2014", "2015", freq="MS", tz=tz) - expected2 = date_range("2014-01-01", "2015-01-01", freq="MS", tz=tz) - - rng3 = date_range("2014", "2020", freq="A", tz=tz) - expected3 = date_range("2014-12-31", "2019-12-31", freq="A", tz=tz) - - rng4 = date_range("2014", "2020", freq="AS", tz=tz) - expected4 = date_range("2014-01-01", "2020-01-01", freq="AS", tz=tz) - - for rng, expected in [ - (rng1, expected1), - (rng2, expected2), - (rng3, expected3), - (rng4, expected4), - ]: - tm.assert_index_equal(rng, expected) - - def test_dti_constructor_small_int(self, any_int_numpy_dtype): - # see gh-13721 - exp = DatetimeIndex( - [ - "1970-01-01 00:00:00.00000000", - "1970-01-01 00:00:00.00000001", - "1970-01-01 00:00:00.00000002", - ] - ) - - arr = np.array([0, 10, 20], dtype=any_int_numpy_dtype) - tm.assert_index_equal(DatetimeIndex(arr), exp) - - def test_ctor_str_intraday(self): - rng = DatetimeIndex(["1-1-2000 00:00:01"]) - assert rng[0].second == 1 - - def test_is_(self): - dti = date_range(start="1/1/2005", end="12/1/2005", freq="M") - assert dti.is_(dti) - assert dti.is_(dti.view()) - assert not dti.is_(dti.copy()) - - def test_index_cast_datetime64_other_units(self): - arr = np.arange(0, 100, 10, dtype=np.int64).view("M8[D]") - idx = Index(arr) - - assert (idx.values == astype_overflowsafe(arr, dtype=np.dtype("M8[ns]"))).all() - - def test_constructor_int64_nocopy(self): - # GH#1624 - arr = np.arange(1000, dtype=np.int64) - index = DatetimeIndex(arr) - - arr[50:100] = -1 - assert (index.asi8[50:100] == -1).all() - - arr = np.arange(1000, dtype=np.int64) - index = DatetimeIndex(arr, copy=True) - - arr[50:100] = -1 - assert (index.asi8[50:100] != -1).all() - - @pytest.mark.parametrize( - "freq", ["M", "Q", "A", "D", "B", "BH", "T", "S", "L", "U", "H", "N", "C"] - ) - def test_from_freq_recreate_from_data(self, freq): - org = date_range(start="2001/02/01 09:00", freq=freq, periods=1) - idx = DatetimeIndex(org, freq=freq) - tm.assert_index_equal(idx, org) - - org = date_range( - start="2001/02/01 09:00", freq=freq, tz="US/Pacific", periods=1 - ) - idx = DatetimeIndex(org, freq=freq, tz="US/Pacific") - tm.assert_index_equal(idx, org) - - def test_datetimeindex_constructor_misc(self): - arr = ["1/1/2005", "1/2/2005", "Jn 3, 2005", "2005-01-04"] - msg = r"(\(')?Unknown datetime string format(:', 'Jn 3, 2005'\))?" - with pytest.raises(ValueError, match=msg): - DatetimeIndex(arr) - - arr = ["1/1/2005", "1/2/2005", "1/3/2005", "2005-01-04"] - idx1 = DatetimeIndex(arr) - - arr = [datetime(2005, 1, 1), "1/2/2005", "1/3/2005", "2005-01-04"] - idx2 = DatetimeIndex(arr) - - arr = [Timestamp(datetime(2005, 1, 1)), "1/2/2005", "1/3/2005", "2005-01-04"] - idx3 = DatetimeIndex(arr) - - arr = np.array(["1/1/2005", "1/2/2005", "1/3/2005", "2005-01-04"], dtype="O") - idx4 = DatetimeIndex(arr) - - idx5 = DatetimeIndex(["12/05/2007", "25/01/2008"], dayfirst=True) - idx6 = DatetimeIndex( - ["2007/05/12", "2008/01/25"], dayfirst=False, yearfirst=True - ) - tm.assert_index_equal(idx5, idx6) - - for other in [idx2, idx3, idx4]: - assert (idx1.values == other.values).all() - - sdate = datetime(1999, 12, 25) - edate = datetime(2000, 1, 1) - idx = date_range(start=sdate, freq="1B", periods=20) - assert len(idx) == 20 - assert idx[0] == sdate + 0 * offsets.BDay() - assert idx.freq == "B" - - idx1 = date_range(start=sdate, end=edate, freq="W-SUN") - idx2 = date_range(start=sdate, end=edate, freq=offsets.Week(weekday=6)) - assert len(idx1) == len(idx2) - assert idx1.freq == idx2.freq - - idx1 = date_range(start=sdate, end=edate, freq="QS") - idx2 = date_range( - start=sdate, end=edate, freq=offsets.QuarterBegin(startingMonth=1) - ) - assert len(idx1) == len(idx2) - assert idx1.freq == idx2.freq - - idx1 = date_range(start=sdate, end=edate, freq="BQ") - idx2 = date_range( - start=sdate, end=edate, freq=offsets.BQuarterEnd(startingMonth=12) - ) - assert len(idx1) == len(idx2) - assert idx1.freq == idx2.freq - - def test_pass_datetimeindex_to_index(self): - # Bugs in #1396 - rng = date_range("1/1/2000", "3/1/2000") - idx = Index(rng, dtype=object) - - expected = Index(rng.to_pydatetime(), dtype=object) - - tm.assert_numpy_array_equal(idx.values, expected.values) - - def test_date_range_tuple_freq_raises(self): - # GH#34703 - edate = datetime(2000, 1, 1) - with pytest.raises(TypeError, match="pass as a string instead"): - date_range(end=edate, freq=("D", 5), periods=20) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/common/test_float.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/common/test_float.py deleted file mode 100644 index 2ca98de914f9ef24030876c8fe46b96a986e516e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/common/test_float.py +++ /dev/null @@ -1,65 +0,0 @@ -""" -Tests that work on both the Python and C engines but do not have a -specific classification into the other test modules. -""" -from io import StringIO - -import numpy as np -import pytest - -from pandas.compat import is_platform_linux - -from pandas import DataFrame -import pandas._testing as tm - -pytestmark = pytest.mark.usefixtures("pyarrow_skip") - - -def test_float_parser(all_parsers): - # see gh-9565 - parser = all_parsers - data = "45e-1,4.5,45.,inf,-inf" - result = parser.read_csv(StringIO(data), header=None) - - expected = DataFrame([[float(s) for s in data.split(",")]]) - tm.assert_frame_equal(result, expected) - - -def test_scientific_no_exponent(all_parsers_all_precisions): - # see gh-12215 - df = DataFrame.from_dict({"w": ["2e"], "x": ["3E"], "y": ["42e"], "z": ["632E"]}) - data = df.to_csv(index=False) - parser, precision = all_parsers_all_precisions - - df_roundtrip = parser.read_csv(StringIO(data), float_precision=precision) - tm.assert_frame_equal(df_roundtrip, df) - - -@pytest.mark.parametrize("neg_exp", [-617, -100000, -99999999999999999]) -def test_very_negative_exponent(all_parsers_all_precisions, neg_exp): - # GH#38753 - parser, precision = all_parsers_all_precisions - - data = f"data\n10E{neg_exp}" - result = parser.read_csv(StringIO(data), float_precision=precision) - expected = DataFrame({"data": [0.0]}) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("exp", [999999999999999999, -999999999999999999]) -def test_too_many_exponent_digits(all_parsers_all_precisions, exp, request): - # GH#38753 - parser, precision = all_parsers_all_precisions - data = f"data\n10E{exp}" - result = parser.read_csv(StringIO(data), float_precision=precision) - if precision == "round_trip": - if exp == 999999999999999999 and is_platform_linux(): - mark = pytest.mark.xfail(reason="GH38794, on Linux gives object result") - request.node.add_marker(mark) - - value = np.inf if exp > 0 else 0.0 - expected = DataFrame({"data": [value]}) - else: - expected = DataFrame({"data": [f"10E{exp}"]}) - - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/formatters/html.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/formatters/html.py deleted file mode 100644 index 47f5d9c17fbfed747f57cedb697c07f43f451604..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/formatters/html.py +++ /dev/null @@ -1,983 +0,0 @@ -""" - pygments.formatters.html - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for HTML output. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import functools -import os -import sys -import os.path -from io import StringIO - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Token, Text, STANDARD_TYPES -from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt - -try: - import ctags -except ImportError: - ctags = None - -__all__ = ['HtmlFormatter'] - - -_escape_html_table = { - ord('&'): '&', - ord('<'): '<', - ord('>'): '>', - ord('"'): '"', - ord("'"): ''', -} - - -def escape_html(text, table=_escape_html_table): - """Escape &, <, > as well as single and double quotes for HTML.""" - return text.translate(table) - - -def webify(color): - if color.startswith('calc') or color.startswith('var'): - return color - else: - return '#' + color - - -def _get_ttype_class(ttype): - fname = STANDARD_TYPES.get(ttype) - if fname: - return fname - aname = '' - while fname is None: - aname = '-' + ttype[-1] + aname - ttype = ttype.parent - fname = STANDARD_TYPES.get(ttype) - return fname + aname - - -CSSFILE_TEMPLATE = '''\ -/* -generated by Pygments -Copyright 2006-2021 by the Pygments team. -Licensed under the BSD license, see LICENSE for details. -*/ -%(styledefs)s -''' - -DOC_HEADER = '''\ - - - - - %(title)s - - - - -

            %(title)s

            - -''' - -DOC_HEADER_EXTERNALCSS = '''\ - - - - - %(title)s - - - - -

            %(title)s

            - -''' - -DOC_FOOTER = '''\ - - -''' - - -class HtmlFormatter(Formatter): - r""" - Format tokens as HTML 4 ```` tags within a ``
            `` tag, wrapped
            -    in a ``
            `` tag. The ``
            ``'s CSS class can be set by the `cssclass` - option. - - If the `linenos` option is set to ``"table"``, the ``
            `` is
            -    additionally wrapped inside a ```` which has one row and two
            -    cells: one containing the line numbers and one containing the code.
            -    Example:
            -
            -    .. sourcecode:: html
            -
            -        
            -
            - - -
            -
            1
            -            2
            -
            -
            def foo(bar):
            -              pass
            -            
            -
            - - (whitespace added to improve clarity). - - Wrapping can be disabled using the `nowrap` option. - - A list of lines can be specified using the `hl_lines` option to make these - lines highlighted (as of Pygments 0.11). - - With the `full` option, a complete HTML 4 document is output, including - the style definitions inside a ``