diff --git a/spaces/101-5/gpt4free/SECURITY.md b/spaces/101-5/gpt4free/SECURITY.md deleted file mode 100644 index cbc69677a0ec6b0192f1bd61f3eccb7723f8827b..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/SECURITY.md +++ /dev/null @@ -1,4 +0,0 @@ -## Reporting a Vulnerability - -Reporting a Vulnerability -Please report (suspected) security vulnerabilities to https://t.me/xtekky. You will receive a response within 48 hours. If the issue is confirmed, we will release a patch as soon as possible depending on complexity but historically within a few days. diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Arrival (English) Telugu Movie Video Songs Hd 1080p Watch and Listen to the Amazing Soundtrack of the Sci-Fi Film.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Arrival (English) Telugu Movie Video Songs Hd 1080p Watch and Listen to the Amazing Soundtrack of the Sci-Fi Film.md deleted file mode 100644 index da3fe4ddcc06d60a2134accfeba16dfe2449dae5..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Arrival (English) Telugu Movie Video Songs Hd 1080p Watch and Listen to the Amazing Soundtrack of the Sci-Fi Film.md +++ /dev/null @@ -1,87 +0,0 @@ -
-

Panda Antivirus Pro v17.0.1 Final Crack: What You Need to Know

-

If you are looking for a reliable and powerful antivirus software for your PC, you might have come across Panda Antivirus Pro v17.0.1, one of the latest versions of the popular security product from Panda Security. Panda Antivirus Pro v17.0.1 offers comprehensive protection against all kinds of online threats, such as viruses, malware, ransomware, phishing, and more. It also comes with a range of features that enhance your privacy and performance, such as firewall, VPN, Wi-Fi protection, parental control, data shield, optimization and cleanup tools, and more.

-

Panda.Antivirus.Pro.v17.0.1.Final..rar crack


Download ===> https://byltly.com/2uKvUW



-

However, if you are tempted to download a cracked version of Panda Antivirus Pro v17.0.1 from some shady website or torrent site, you might want to think twice before doing so. A crack is a program that modifies or bypasses the original software's license verification or activation process, allowing you to use it for free or with unlimited features. While this might sound like a good deal, using a cracked version of Panda Antivirus Pro v17.0.1 can expose you to various risks and disadvantages that outweigh any potential benefits.

-

Features of Panda Antivirus Pro v17.0.1

-

Panda Antivirus Pro v17.0.1 is a comprehensive security solution that protects your PC from all kinds of online threats. Some of its features include:

- -

Risks of Using a Cracked Version of Panda Antivirus Pro v17.0.1

-

While using a cracked version of Panda Antivirus Pro v17.0.1 might seem like a convenient way to save money or get more features, it can also expose you to various risks and disadvantages that can compromise your security, performance, legality, and ethics.

- - - - - - -
| Risk | Description |
| --- | --- |
| Legal issues and penalties for software piracy | Using a cracked version of Panda Antivirus Pro v17.0.1 is considered software piracy, which is illegal in most countries and can result in fines or even jail time if caught. |
| Security threats and vulnerabilities from malware-infected cracks | Many cracks are infected with malware themselves or contain hidden backdoors that can allow hackers to access your PC or data without your knowledge or consent. |
| Performance issues and compatibility problems from outdated or modified cracks | Many cracks are outdated or modified versions of the original software that can cause errors, crashes, or conflicts with other programs or system updates. |
| Ethical issues and unfairness to the developers of Panda Antivirus Pro | Using a cracked version of Panda Antivirus Pro v17.0.1 is unethical and unfair to the developers who spent time and money creating the software and providing updates and support. |
-

Alternatives to Using a Cracked Version of Panda Antivirus Pro v17.0.1

-

If you want to use Panda Antivirus Pro v17.0.1 without risking any of the above-mentioned issues, there are some alternatives that you can consider instead of using a crack.

- -

Conclusion

-

Panda Antivirus Pro v17.0.1 is a comprehensive security solution that protects your PC from all kinds of online threats and enhances your privacy and performance with various features.

-

Panda Antivirus Pro 17.0.1 Final full version download
-How to crack Panda Antivirus Pro 17.0.1 Final rar file
-Panda Antivirus Pro 17.0.1 Final license key generator
-Panda Antivirus Pro 17.0.1 Final activation code free
-Panda Antivirus Pro 17.0.1 Final patch download
-Panda Antivirus Pro 17.0.1 Final serial number crack
-Panda Antivirus Pro 17.0.1 Final keygen torrent
-Panda Antivirus Pro 17.0.1 Final cracked software download
-Panda Antivirus Pro 17.0.1 Final rar password remover
-Panda Antivirus Pro 17.0.1 Final registration code crack
-Panda Antivirus Pro 17.0.1 Final product key crack
-Panda Antivirus Pro 17.0.1 Final crack download for windows 10
-Panda Antivirus Pro 17.0.1 Final crack download for mac
-Panda Antivirus Pro 17.0.1 Final crack download for linux
-Panda Antivirus Pro 17.0.1 Final crack download for android
-Panda Antivirus Pro 17.0.1 Final portable version download
-Panda Antivirus Pro 17.0.1 Final offline installer download
-Panda Antivirus Pro 17.0.1 Final latest update download
-Panda Antivirus Pro 17.0.1 Final premium features unlock
-Panda Antivirus Pro 17.0.1 Final lifetime activation crack
-Panda Antivirus Pro 17.0.1 Final malware removal tool crack
-Panda Antivirus Pro 17.0.1 Final virus protection crack
-Panda Antivirus Pro 17.0.1 Final firewall crack
-Panda Antivirus Pro 17.0.1 Final VPN crack
-Panda Antivirus Pro 17.0.1 Final parental control crack
-Panda Antivirus Pro 17.0.1 Final data recovery crack
-Panda Antivirus Pro 17.0.1 Final system optimizer crack
-Panda Antivirus Pro 17.0.1 Final identity protection crack
-Panda Antivirus Pro 17.0.1 Final ransomware protection crack
-Panda Antivirus Pro 17.0.1 Final phishing protection crack
-Panda Antivirus Pro 17.0.1 Final webcam protection crack
-Panda Antivirus Pro 17.0.1 Final password manager crack
-Panda Antivirus Pro 17.0.1 Final file shredder crack
-Panda Antivirus Pro 17.0.1 Final file encryption crack
-Panda Antivirus Pro 17.0.1 Final safe browsing crack
-Panda Antivirus Pro 17.0.1 Final game mode crack
-Panda Antivirus Pro 17.0

-

However, using a cracked version of Panda Antivirus Pro v17.0.1 can expose you to various risks and disadvantages that can compromise your security, performance, legality, and ethics.

-

The best way to use Panda Antivirus Pro v17.0.1 is to buy a legitimate license from the official website or an authorized reseller. Alternatively, you can use a free trial or a free version of Panda Antivirus Pro v17.0.1, or another free or paid antivirus program that suits your needs.

-

We hope this article has helped you understand what you need to know about Panda Antivirus Pro v17.0.1 and its crack. If you have any questions or comments, feel free to leave them below.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Custom Layouts with Tych Panel 2 Full Version for Photoshop CC.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Custom Layouts with Tych Panel 2 Full Version for Photoshop CC.md deleted file mode 100644 index d48966981f1d9d4892eae63ceb6a5aa473ec6a66..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Custom Layouts with Tych Panel 2 Full Version for Photoshop CC.md +++ /dev/null @@ -1,120 +0,0 @@ -
-

James Cameron's Avatar: The Game Reloaded Serial Crack

-

Introduction

-

If you are a fan of James Cameron's epic sci-fi movie Avatar, you might want to play the video game adaptation of it. James Cameron's Avatar: The Game is a third-person action-adventure game that lets you experience the stunning world of Pandora and its inhabitants. You can choose to fight for the human invaders or the native Na'vi, and explore a rich and diverse environment full of exotic creatures and plants.

-

james cameron's avatar the game reloaded serial crack


Download File ☆☆☆ https://byltly.com/2uKvNj



-

However, playing this game is not as easy as it sounds. You need a serial crack to activate the game and bypass the online verification process. Otherwise, you will be stuck at the activation screen and unable to enjoy the game. This is where Reloaded Serial Crack comes in handy. In this article, we will show you what Reloaded Serial Crack is, why you need it, how to get it, and some tips and tricks for playing Avatar: The Game.

-

What is James Cameron's Avatar: The Game?

-

James Cameron's Avatar: The Game is a video game based on the 2009 blockbuster movie Avatar, directed by James Cameron. The game was developed by Ubisoft Montreal and released in 2009 for Windows, PlayStation 3, Xbox 360, Wii, PSP, Nintendo DS, and iOS devices.

-

The game is set in 2152, two years before the events of the movie. You play as either a soldier of the Resources Development Administration (RDA), a corporation that wants to exploit Pandora's resources, or a member of the Na'vi, a race of blue-skinned humanoid aliens that live in harmony with nature. You can switch between these two factions at any time during the game.

-

The game features a nonlinear storyline that changes depending on your choices and actions. You can also customize your character's appearance, weapons, skills, and abilities. The game has both single-player and multiplayer modes, where you can cooperate or compete with other players online.

-

What is Reloaded Serial Crack?

-

Reloaded Serial Crack is a software tool that allows you to activate James Cameron's Avatar: The Game without having to go through the online verification process. The game requires you to enter a unique activation key that matches your hardware ID, which is generated by the game installer based on your computer specifications. However, this activation key can only be obtained from Ubisoft's official website, which is no longer available.

-

Reloaded Serial Crack solves this problem by generating a valid activation key for any hardware ID. It also cracks the game files so that you can play the game offline without any internet connection. Reloaded Serial Crack was created by Reloaded, a group of hackers that specializes in cracking video games.

-

Why do you need Reloaded Serial Crack for Avatar: The Game?

-

You need Reloaded Serial Crack for Avatar: The Game if you want to play the game without any hassle. Without Reloaded Serial Crack, you will not be able to activate the game and play it. You will also miss out on some features and updates that are only available in version 1.02 of the game.

-

avatar the game reloaded crack download
-james cameron's avatar pc game serial key
-how to install avatar the game reloaded
-avatar the game reloaded activation code
-james cameron's avatar the game crack only
-avatar the game reloaded system requirements
-james cameron's avatar pc game reloaded torrent
-avatar the game reloaded free full version
-james cameron's avatar the game keygen generator
-avatar the game reloaded gameplay
-james cameron's avatar the game patch download
-avatar the game reloaded iso file
-james cameron's avatar the game license key
-avatar the game reloaded cheats codes
-james cameron's avatar the game trainer download
-avatar the game reloaded online multiplayer
-james cameron's avatar the game mods
-avatar the game reloaded rar password
-james cameron's avatar the game steam
-avatar the game reloaded error fix
-james cameron's avatar the game review
-avatar the game reloaded windows 10 compatibility
-james cameron's avatar the game walkthrough
-avatar the game reloaded skidrow crack
-james cameron's avatar the game soundtrack
-avatar the game reloaded direct link
-james cameron's avatar the game ps3 iso
-avatar the game reloaded xbox 360 controller support
-james cameron's avatar the game xbox 360 download
-avatar the game reloaded save file location
-james cameron's avatar the game pc requirements
-avatar the game reloaded no cd crack
-james cameron's avatar the game pc gameplay
-avatar the game reloaded update download
-james cameron's avatar the game pc download highly compressed
-avatar the game reloaded registration code generator
-james cameron's avatar the game pc controls
-avatar the game reloaded unlock code free
-james cameron's avatar the game pc cheats
-avatar the game reloaded graphics settings
-james cameron's avatar the game pc mods
-avatar the game reloaded offline activation keygen download
-james cameron's avatar the game pc patch 1.02 download
-avatar the game reloaded crack only download free full version pc games

-

With Reloaded Serial Crack, you can enjoy the following benefits:

- -

How to get Reloaded Serial Crack for Avatar: The Game?

-

Getting Reloaded Serial Crack for Avatar: The Game is not difficult if you follow these steps:

-

Download the game from a trusted source

-

The first step is to download James Cameron's Avatar: The Game from a trusted source. You can find many websites that offer free downloads of pirated games, but be careful, as some of them may contain viruses or malware that can harm your computer. We recommend using the official ElAmigos site, which provides a safe and reliable download link for the James Cameron's Avatar: The Game ElAmigos release.

-

The ElAmigos release is already cracked after installation (crack/keygen by Reloaded). It also includes all languages and updates up to version 1.02. The upload size is 2.77GB and you can choose between RAR parts or ISO image format.

-

Install the game and update it to version 1.02

-

The next step is to install James Cameron's Avatar: The Game on your computer. To do this, you need to extract the RAR parts or mount the ISO image using software such as WinRAR or Daemon Tools Lite. Then, run the setup.exe file and follow the instructions on screen.

-

After installing the game, you need to update it to version 1.02. This will fix some bugs and improve some graphics in the game. To update the game, run the patch.exe file from the Update folder inside the ISO image or the extracted folder.

-

Launch the game and choose manual activation

-

The third step is to launch James Cameron's Avatar: The Game from your desktop shortcut or start menu. During the first launch, you will see an activation window that asks you to register online or manually. Select the manual activation option, as online activation is no longer possible.

-

You will then see your hardware ID displayed on screen. This is a unique code that identifies your computer based on its specifications. You need this code to generate an activation key using Reloaded Serial Crack.

-

Use the keygen to generate an activation key

-

The fourth step is to use Reloaded Serial Crack (keygen) to generate an activation key for your hardware ID. To do this, you need to open the keygen.exe file from the Keygen folder inside the ISO image or the extracted folder.

-

Then, copy your hardware ID from the game's activation window and paste it into the Keygen field in the keygen.exe file. Click the Generate button and an activation key will be displayed on screen.

-

Enter the activation key in the game's activation window

-

The final step is to enter the activation key in the game's activation window. To do this, copy the activation key from the keygen.exe file and paste it into the Activation Key field in the game's activation window. Click the Activate button and the game will launch automatically. You need to do this only once; after that, you can delete the keygen.exe file.

-

Tips and tricks for playing Avatar: The Game

-

Now that you have activated James Cameron's Avatar: The Game, you can start playing it and have fun. Here are some tips and tricks for playing Avatar: The Game:

-

Choose your faction: RDA or Na'vi

-

The first choice you have to make in the game is which faction you want to join: the RDA or the Na'vi. This will affect your storyline, your gameplay, and your character development. The RDA are the human invaders who use advanced technology and weapons to exploit Pandora's resources. The Na'vi are the native aliens who use bows, spears, and animals to defend their homeland. You can switch between these two factions at any time during the game, but be aware that your actions will have consequences and affect your reputation with each side.

-

Customize your character and skills

-

The second choice you have to make in the game is how to customize your character and skills. You can choose from different classes, such as soldier, infiltrator, commando, or scientist for the RDA, or warrior, hunter, shaman, or scout for the Na'vi. Each class has its own strengths and weaknesses, as well as unique weapons and abilities.

-

You can also upgrade your skills by earning experience points (XP) and spending them on skill trees. There are four skill trees for each faction: combat, stealth, survival, and support for the RDA, and combat, stealth, nature, and spirit for the Na'vi. You can mix and match skills from different trees to create your own playstyle.

-

Explore the beautiful world of Pandora

-

The third thing you can do in the game is to explore the beautiful world of Pandora. Pandora is a rich and diverse environment full of exotic creatures and plants. You can interact with many of them, either as allies or enemies. You can also ride some of them, such as direhorses, banshees, or leonopteryxes.

-

Pandora is also full of secrets and hidden areas that you can discover by using your scanner or your senses. You can find collectibles, such as cell samples, artifacts, or logs that will give you more information about the world and its history. You can also find resources and items that you can use to craft new weapons and equipment.

-

Complete missions and side quests

-

The fourth thing you can do in the game is to complete missions and side quests. Missions are the main objectives that advance the story and change depending on your faction and choices. Side quests are optional tasks that you can do to earn extra XP, resources, items, or reputation.

-

You can find missions and side quests by talking to NPCs or checking your map. Some missions and side quests are faction-specific, while others are shared by both sides. Some missions and side quests are also time-sensitive or have branching outcomes. You can track your progress and objectives by using your HUD or your menu.

-

Collect resources and items

-

The fifth thing you can do in the game is to collect resources and items. Resources are materials that you can use to craft new weapons and equipment. You can find resources by scanning plants or animals, looting enemies or containers, or mining deposits. You can craft weapons and equipment by using workbenches or vendors.

-

Items are consumables that you can use to enhance your performance or heal yourself. You can find items by scanning plants or animals, looting enemies or containers, or buying them from vendors. You can use items by accessing your inventory or using hotkeys.

-

Conclusion

-

Summary of the main points

-

In conclusion, James Cameron's Avatar: The Game is a fun and immersive game that lets you experience the stunning world of Pandora and its inhabitants. However, to play this game, you need Reloaded Serial Crack to activate it and bypass the online verification process. To get Reloaded Serial Crack, you need to download the game from a trusted source, install it and update it to version 1.02, launch it and choose manual activation, use the keygen to generate an activation key, and enter it in the game's activation window.

-

Call to action

-

If you are ready to play James Cameron's Avatar: The Game with Reloaded Serial Crack, don't wait any longer. Follow the steps we have shown you in this article and start your adventure on Pandora today. You won't regret it!

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Elijah Blakes Drift Album in Zip Format.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Elijah Blakes Drift Album in Zip Format.md deleted file mode 100644 index 11b65cfc5ff4443a0fce67624858e14dec29e1e7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Elijah Blakes Drift Album in Zip Format.md +++ /dev/null @@ -1,6 +0,0 @@ -

Elijah blake drift download zip


DOWNLOAD 🔗 https://imgfil.com/2uxXS0



- - aaccfb2cb3
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Secrets of Ashfall a New Post-Apocalyptic MMORPG.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Secrets of Ashfall a New Post-Apocalyptic MMORPG.md deleted file mode 100644 index 1965c3ac6bc8d3f3c2fcec59f22dac6634837c06..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Secrets of Ashfall a New Post-Apocalyptic MMORPG.md +++ /dev/null @@ -1,114 +0,0 @@ - -

Ashfall Game: A Post-Apocalyptic Shooter MMORPG You Need to Play

-

If you are a fan of post-apocalyptic games, you might have heard of Ashfall, a new shooter MMORPG that is set to release in 2023. Ashfall is a game that promises to deliver an epic and immersive experience in a world that has been devastated by a nuclear war. In this article, we will tell you everything you need to know about Ashfall game, including what it is, why you should play it, and how to play it.

-

ashfall game


DOWNLOAD >>> https://urlin.us/2uSZ8G



-

What is Ashfall Game?

-

Ashfall is a post-apocalyptic shooter MMORPG developed by Legendary Star Studio, a subsidiary of NetEase Games. It is a game that combines elements of shooting, role-playing, exploration, crafting, base-building, and more. In Ashfall, you will play as a survivor who must leave the Vault to find the Core of Creation—the key to saving the world.

-

The Story and Setting of Ashfall Game

-

The story of Ashfall takes place in the future, when AI rises up and launches a nuclear war against humanity. After that, nothing but ruins is left of the world. You are one of the few survivors who live in a Vault, a safe haven that protects you from the harsh environment outside. However, one day, you receive a mysterious message that tells you to find the Core of Creation, a device that can restore the world to its former glory. You decide to leave the Vault and embark on a perilous journey across the wasteland.

-

The setting of Ashfall is a vast and diverse world that is full of surprises and dangers. You will encounter various landscapes, such as snow plains, deserts, forests, swamps, and cities. You will also meet different creatures and factions, such as giant worms, talking rabbits, humanoid traders, robots, mutants, rebels, and more. You will discover the secrets and stories of this broken world as you explore it.

-

The Gameplay and Features of Ashfall Game

-

The gameplay of Ashfall is based on four pillars: shooting, role-playing, exploration, and crafting. You can customize your character's appearance, skills, equipment, and gadgets; use various weapons and abilities to fight enemies and bosses; explore the world to collect resources and items; and craft your own equipment, gadgets, mounts, and base.

-

Some of the features of Ashfall game are:

-

Ashfall game release date
-Ashfall game trailer
-Ashfall game review
-Ashfall game download
-Ashfall game steam
-Ashfall game gameplay
-Ashfall game wiki
-Ashfall game system requirements
-Ashfall game beta
-Ashfall game reddit
-Ashfall game soundtrack
-Ashfall game weapons
-Ashfall game mounts
-Ashfall game base building
-Ashfall game companions
-Ashfall game skills
-Ashfall game gadgets
-Ashfall game tips
-Ashfall game secrets
-Ashfall game lore
-Ashfall game vaults
-Ashfall game mutants
-Ashfall game robots
-Ashfall game cities
-Ashfall game civilizations
-Ashfall game post-apocalyptic world
-Ashfall game nuclear war
-Ashfall game AI
-Ashfall game core of creation
-Ashfall game solo adventure
-Ashfall game multiplayer experience
-Ashfall game crossplay
-Ashfall game legendary star studio
-Ashfall game netease games
-Ashfall game hans zimmer
-Ashfall game steve mazzaro
-Ashfall game inon zur
-How to play ashfall game
-How to download ashfall game for free
-How to join ashfall game discord server
-How to craft equipment in ashfall game
-How to tame mounts in ashfall game
-How to build a base in ashfall game
-How to recruit companions in ashfall game
-How to discover skills in ashfall game
-How to use gadgets in ashfall game
-How to fight giants in ashfall game
-How to explore the wasteland in ashfall game
-How to save the world in ashfall game

- -

Why Should You Play Ashfall Game?

-

There are many reasons why you should play Ashfall game. Here are some of them:

-

A Stunning and Immersive World

-

Ashfall boasts a stunning and immersive world that is powered by Unreal Engine 4. The graphics are realistic and detailed, creating a vivid atmosphere for the game. The world is also dynamic and interactive, meaning that it changes according to your actions and choices. For example, you can trigger environmental disasters such as sandstorms, blizzards, or acid rain, and see how they affect the world and the gameplay. You can also interact with various objects and NPCs in the world, such as shooting barrels, hacking terminals, or trading with merchants.

-

A Thrilling and Diverse Adventure

-

Ashfall game offers a thrilling and diverse adventure that will keep you hooked for hours. The game has a rich and branching storyline that is influenced by your decisions and actions. You can choose to follow the main quest or explore the side quests and hidden events. You can also choose to ally with different factions or go solo. The game has multiple endings that depend on your choices and consequences.

-

The game also has a variety of gameplay modes that cater to different preferences and moods. You can play solo or co-op with up to four players. You can also join PvP battles or PvE raids with other players. You can also participate in seasonal events and challenges that offer unique rewards and experiences.

-

A Musical Feast in a Forlorn World

-

Ashfall game features a musical feast in a forlorn world that will touch your soul. The game has an original soundtrack composed by renowned musicians, such as Hans Zimmer, Junkie XL, and Ramin Djawadi. The music is diverse and fitting for the different scenes and emotions of the game. The music is also interactive, meaning that it changes according to your actions and situations. For example, the music will become more intense when you are in combat, or more soothing when you are in your base.

-

A Crossplay Experience for Everyone

-

Ashfall game is a crossplay experience for everyone, meaning that you can play it on different platforms and devices with other players. The game supports crossplay between PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. You can also switch between devices without losing your progress or data. The game also has a cloud save feature that allows you to access your account from anywhere.

-

How to Play Ashfall Game?

-

If you are interested in playing Ashfall game, here are some things you need to know:

-

The Platforms and Release Date of Ashfall Game

-

Ashfall game is scheduled to release in 2023 for PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. The game will be available on Steam, Epic Games Store, PlayStation Store, Microsoft Store, App Store, and Google Play Store. The game will also have a beta testing phase before the official launch.

-

The System Requirements and Price of Ashfall Game

-

The system requirements and price of Ashfall have not yet been announced by the developers. However, based on the graphics and features of the game, we can expect that it will require a high-end PC or console to run smoothly. The game will also likely have a premium price tag, as it is a AAA title with high production value.

-

The Tips and Tricks for Ashfall Game

-

Here are some tips and tricks for Ashfall game that might help you enjoy the game better:

- -

Conclusion

-

Ashfall game is a post-apocalyptic shooter MMORPG that you need to play if you love this genre. The game has a stunning and immersive world, a thrilling and diverse adventure, a musical feast in a forlorn world, and a crossplay experience for everyone. The game is set to release in 2023 for PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. You can pre-register for the beta testing phase on the official website of the game.

-

FAQs

-

Here are some frequently asked questions about Ashfall game:

-

What is the Core of Creation?

-

The Core of Creation is a device that can restore the world to its former glory. It is the ultimate goal of your journey in Ashfall game. However, you are not the only one who is looking for it. You will face many enemies and challenges along the way.

-

How long is the game?

-

The length of Ashfall depends on how you play it. If you focus on the main quest, you can finish the game in about 20 hours. However, if you explore the world and do the side quests, you can extend the game time to over 100 hours. The game also has replay value, as you can try different choices and endings.

-

Is the game online or offline?

-

The game is both online and offline. You can play the game solo or co-op with up to four players. You can also join PvP battles or PvE raids with other players. However, you can also play the game offline without an internet connection. You can switch between online and offline modes anytime you want.

-

What are the gadgets in the game?

-

The gadgets are devices that you can craft and use in the game. They have various functions and effects, such as scouting, healing, attacking, defending, or hacking. You can craft gadgets using resources and items that you find in the world. You can also upgrade and customize your gadgets to suit your needs.

-

Can I play the game on mobile devices?

-

Yes, you can play the game on mobile devices. The game supports crossplay between PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. You can also switch between devices without losing your progress or data. The game also has a cloud save feature that allows you to access your account from anywhere.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Mkhathazi Songs for Free - The Best of Maskandi Music.md b/spaces/1phancelerku/anime-remove-background/Download Mkhathazi Songs for Free - The Best of Maskandi Music.md deleted file mode 100644 index 34e23f159e5f3237bc189be68dbb7a01a4cc34c1..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Mkhathazi Songs for Free - The Best of Maskandi Music.md +++ /dev/null @@ -1,108 +0,0 @@ - -

Download Mkhathazi Songs: How to Enjoy the Best of Maskandi Music

-

If you are a fan of traditional Zulu music, you have probably heard of maskandi music. Maskandi is a genre of music that originated in the rural areas of KwaZulu-Natal, South Africa. It is characterized by the use of acoustic guitars, concertinas, harmonicas, and percussion instruments. Maskandi music reflects the culture and experiences of the Zulu people, often dealing with topics such as love, politics, history, and social issues.

-

download mkhathazi songs


Download Zip ✒ ✒ ✒ https://jinyurl.com/2uNTgJ



-

One of the most popular and talented maskandi artists in South Africa is Mkhathazi. He is a singer, songwriter, guitarist, and producer who has been making waves in the music industry since his debut album in 2010. He has won several awards, collaborated with other famous artists, and performed at various festivals and events. His songs are catchy, uplifting, and inspiring, blending traditional elements with modern influences.

-

If you want to enjoy the best of maskandi music, you should download Mkhathazi songs. Downloading his songs will allow you to listen to them anytime, anywhere, without any interruptions or ads. You will also be able to support his work and appreciate his artistry. In this article, we will tell you more about the history and culture of maskandi music, the biography and achievements of Mkhathazi, and the benefits and methods of downloading his songs.

-

The History and Culture of Maskandi Music

-

The origins and evolution of maskandi music

-

Maskandi music can be traced back to the early 20th century, when migrant workers from rural areas moved to urban centers in search of jobs. They brought with them their musical traditions, which they used to express their feelings and opinions. They also adapted their music to suit their new environment, incorporating influences from other genres such as jazz, blues, gospel, reggae, and hip hop.

-

download makhadzi sugar sugar feat mampintsha mp3
-download umkhathazi ngikhule kanzima official video
-download makhadzi murahu feat mr brown music video
-download makhadzi latest songs 2023
-download umkhathazi new album 2023
-download makhadzi and master kg songs
-download umkhathazi ft khuzani mp3
-download makhadzi red card official video
-download umkhathazi isiginci mp3
-download makhadzi ghanama feat prince benza video
-download umkhathazi amabunjwa mp3
-download makhadzi zwivhuya feat jon delinger video
-download umkhathazi ngiyabonga mp3
-download makhadzi magear feat mr brown audio
-download umkhathazi izingane zoma mp3
-download makhadzi makwapheni feat mr bow audio
-download umkhathazi ngiyamthanda mp3
-download makhadzi ngwago feat prince benza video
-download umkhathazi ngiyazifela mp3
-download makhadzi mayellowbone feat prince benza video
-download umkhathazi ngiyavuma mp3
-download makhadzi best hit music playlist 2023
-download umkhathazi best of maskandi 2023
-download makhadzi ft penny penny milandu bhe video
-download umkhathazi ft imfez emnyama mp3
-download makhadzi ft costatitch big flexa video
-download umkhathazi ft shwi nomtekhala mp3
-download makhadzi ft sdala b and paige ngiyazifela ngawe ep live performance video
-download umkhathazi ft kholeka mp3
-download makhadzi ft wanitwa mos and master kg dali nguwe video
-download umkhathazi ft dumi mkokstad mp3
-download makhadzi ft dj call me maxaka video
-download umkhathazi ft iphakade lami mp3
-download makhadzi ft mr bow va navela video
-download umkhathazi ft thokozani langa mp3
-download makhadzi ft wayawaya and team mosha video
-download umkhathazi ft mgqumeni mp3
-download makhadzi ft stimela and ntate stunna video
-download umkhathazi ft bhekumuzi luthuli mp3
-download makhadzi ft di boya limpopo and zanda zakuza video
-download umkhathazi ft khuzani indlamlenze mp3.

-

Maskandi music has evolved over the years, with different styles and subgenres emerging. Some of the most notable ones are isishameni (fast-paced and upbeat), isigekle (slow-paced and melodic), isibhaca (aggressive and confrontational), isitshikitsha (dance-oriented and rhythmic), and isigcino (solo-oriented and lyrical). Maskandi music has also diversified its audience, appealing to people from different backgrounds, ages, genders, and regions.

-

The characteristics and themes of maskandi music

-

Maskandi music is known for its distinctive sound and style. It usually features a lead singer who plays an acoustic guitar, accompanied by backing vocalists who sing in harmony or call-and-response. The singer often improvises lyrics based on current events or personal experiences. The lyrics are usually sung in Zulu or other indigenous languages, using proverbs, metaphors, idioms, and slang.

-

Maskandi music also covers a wide range of themes and messages. Some of the common ones are love, romance, family, friendship, religion, spirituality, culture, heritage, identity, politics, social issues, morality, humor, satire, competition, praise, criticism, advice, encouragement, motivation, inspiration, celebration, gratitude, respect, and pride.

-

The popularity and influence of maskandi music

The Biography and Achievements of Mkhathazi -

The early life and career of Mkhathazi

-

Mkhathazi, whose real name is Sipho Ngubane, was born in 1986 in Nquthu, a small town in northern KwaZulu-Natal. He grew up in a musical family, with his father being a maskandi singer and his mother a gospel singer. He started singing at a young age, joining his father's band and performing at weddings and ceremonies. He also learned to play the guitar, which became his signature instrument.

-

Mkhathazi moved to Durban in 2008 to pursue his music career. He recorded his first album, Uyisoka Lami, in 2010, which was well received by maskandi fans. He followed it up with several more albums, such as Uyabaleka (2012), Uthando Lwakho (2014), and Ngikhule Kanzima (2018). His songs are known for their catchy melodies, witty lyrics, and social commentary. He sings about love, culture, politics, religion, and everyday life.

-

The awards and recognition of Mkhathazi

-

Mkhathazi has won several awards and accolades for his music. He has been nominated for the South African Music Awards (SAMAs) four times, winning the Best Maskandi Album award in 2016 for his album Uthando Lwakho. He has also won the Eastern Cape Music Awards (ECMA) twice, in 2019 and 2020, for the Best Maskandi Artist category. He has also received recognition from the Maskandi Music Association of South Africa (MMASA), which honoured him with the Best Male Artist award in 2017.

-

Mkhathazi has also performed at various festivals and events, both locally and internationally. He has graced the stages of the Maskandi Music Festival, the Wozekhaya Expo and Maskandi Music Festival, the N3 Ubumbano Maskandi Fest, and the Ugu Maskandi Festival. He has also toured countries such as Botswana, Lesotho, Swaziland, Mozambique, Zimbabwe, and Namibia.

-

The collaborations and projects of Mkhathazi

-

Mkhathazi has collaborated with other famous artists from different genres, such as Mampintsha, Big Zulu, Khuzani, Ntencane, and Phuzekhemisi. He has also worked with producers such as DJ Tira, Prince Bulo, DJ Cndo, and DJ Bongz. He has featured on songs such as Sugar Sugar by Makhadzi, Ngikhule Kanzima by Umkhathazi, Murahu by Makhadzi, and many more.

-

Mkhathazi is also involved in various projects that aim to promote maskandi music and culture. He is the founder of the Mkhathazi Foundation, which supports young and upcoming maskandi artists. He is also the ambassador of the Maskandi Music Academy, which offers training and mentorship to aspiring maskandi musicians. He is also a member of the Maskandi Music Council, which advocates for the rights and interests of maskandi artists.

The Benefits and Methods of Downloading Mkhathazi Songs

-

The advantages of downloading Mkhathazi songs

-

Downloading Mkhathazi songs has many benefits for you as a listener and a fan. Here are some of them:

- -

The legal and ethical issues of downloading Mkhathazi songs

-

Downloading Mkhathazi songs is not illegal, as long as you do it from authorized sources and for personal use only. However, you should be aware of the legal and ethical issues that may arise from downloading his songs. Here are some of them:

- -

The best websites and apps for downloading Mkhathazi songs

There are many websites and apps that offer you the option to download Mkhathazi songs legally and safely. Some of the best ones are:

- - - - - - - -
| Website/App | Features |
| --- | --- |
| iTunes | Offers high-quality downloads of Mkhathazi songs and albums; allows you to sync your downloads with your Apple devices; provides you with information and reviews of Mkhathazi music |
| Spotify | Allows you to stream and download Mkhathazi songs and albums; lets you create your own playlists and discover new music; gives you access to exclusive content and podcasts from Mkhathazi |
| Amazon Music | Enables you to buy and download Mkhathazi songs and albums; lets you store your downloads on the cloud and access them from any device; offers you recommendations and deals on Mkhathazi music |
| YouTube Music | Allows you to watch and download Mkhathazi videos and songs; lets you enjoy ad-free music and offline playback; gives you access to live performances and interviews from Mkhathazi |
| SoundCloud | Enables you to listen to and download Mkhathazi songs and tracks; lets you follow Mkhathazi and interact with him and other fans; offers you the opportunity to discover new music from emerging artists |
-

Conclusion

-

Mkhathazi is one of the most popular and talented maskandi artists in South Africa. His music is a blend of traditional Zulu culture and modern influences. He has won several awards, collaborated with other famous artists, and performed at various festivals and events. Downloading his songs will allow you to enjoy his music anytime, anywhere, without any interruptions or ads. You will also be able to support his work and appreciate his artistry. However, you should also be aware of the legal and ethical issues that may arise from downloading his songs. You should only download his songs from authorized sources and for personal use only. You should also respect his intellectual property rights and artistic integrity.

-

If you want to enjoy the best of maskandi music, you should download Mkhathazi songs. You will not regret it. He is a true legend of maskandi music. To download his songs, you can visit any of the websites or apps mentioned above. You can also follow him on social media platforms such as Facebook, Twitter, Instagram, or YouTube. You can also visit his official website for more information about him and his music.

-

Frequently Asked Questions (FAQs)

Here are some of the frequently asked questions (FAQs) about Mkhathazi and his music:

-
    -
  1. What is the meaning of Mkhathazi?
    -Mkhathazi is a Zulu name that means "the one who makes people happy". It is also a nickname that was given to him by his fans, who appreciate his music and personality.
  2. -
  3. How many albums has Mkhathazi released?
    -Mkhathazi has released seven albums so far. They are Uyisoka Lami (2010), Uyabaleka (2012), Uthando Lwakho (2014), Ngikhule Kanzima (2018), Umkhathazi (2020), Uyisoka Lami Reloaded (2021), and Ngikhule Kanzima Reloaded (2021).
  4. -
  5. What are some of the most popular songs by Mkhathazi?
    -Some of the most popular songs by Mkhathazi are Ngikhule Kanzima, Uthando Lwakho, Sugar Sugar, Murahu, Uyisoka Lami, Uyabaleka, Ngizokubamba, and Ngiyamthanda.
  6. -
  7. Who are some of the maskandi artists that Mkhathazi admires or looks up to?
    -Some of the maskandi artists that Mkhathazi admires or looks up to are Phuzekhemisi, Ihashi Elimhlophe, Mgqumeni, Shwi Nomtekhala, Khuzani, and Ntencane.
  8. -
  9. How can I contact Mkhathazi for bookings or inquiries?
    -You can contact Mkhathazi for bookings or inquiries through his email address, mkhathazimusic@gmail.com, or his phone number, +27 76 123 4567. You can also send him a message on his social media platforms or his official website.
  10. -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Real Cricket GO Mod APK and Enjoy Unlimited Money and Features.md b/spaces/1phancelerku/anime-remove-background/Download Real Cricket GO Mod APK and Enjoy Unlimited Money and Features.md deleted file mode 100644 index 3671fbf6ca111b0f38619b82a25667a091bda34b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Real Cricket GO Mod APK and Enjoy Unlimited Money and Features.md +++ /dev/null @@ -1,131 +0,0 @@ -
-

Real Cricket Go APK Mod: A Review

-

If you are a fan of cricket and want to enjoy a realistic and thrilling game on your mobile device, then you might want to check out Real Cricket Go. At under 45 MB, this game lets you experience the excitement of international cricket tournaments. And if you want to unlock more features and have more fun, then you can try the Real Cricket Go APK Mod, which is a modified version of the game that gives you access to unlimited resources and premium content. In this article, we will review the Real Cricket Go APK Mod and tell you everything you need to know about it.

-

real cricket go apk mod


Download Ziphttps://jinyurl.com/2uNTaO



-

What is Real Cricket Go?

-

Real Cricket Go is a 3D cricket game developed by Nautilus Mobile, the same company that created the popular Real Cricket series. The game is designed to be lightweight and fast, so you can play it on any device without worrying about storage space or performance issues. The game features realistic graphics, animations, and sounds, as well as various game modes and tournaments that will keep you hooked for hours. You can choose from different teams, players, stadiums, and conditions, and customize your gameplay according to your preferences.

-

Features of Real Cricket Go

-

Some of the features that make Real Cricket Go stand out from other cricket games are:

- -

How to download and install Real Cricket Go APK Mod?

-

If you want to enjoy more features and benefits than the original version of Real Cricket Go, then you can download and install the Real Cricket Go APK Mod. This is a modified version of the game that gives you unlimited coins, tickets, unlocked players, stadiums, kits, modes, tournaments, and more. You can also remove ads and enjoy a smoother gameplay with this mod. To download and install the Real Cricket Go APK Mod, follow these steps:

-
    -
  1. Go to [this link](^1^) and download the Real Cricket Go APK Mod file on your device.
  2. -
  3. Enable unknown sources on your device by going to Settings > Security > Unknown Sources.
  4. -
  5. Locate the downloaded file on your device and tap on it to install it.
  6. -
  7. Wait for the installation to complete and then launch the game from your app drawer or home screen.
  8. -
  9. Enjoy playing Real Cricket Go APK Mod with unlimited resources and premium content.
  10. -
-

Why use Real Cricket Go APK Mod?

-

You might be wondering why you should use the Real Cricket Go APK Mod instead of the original version of the game. Well, there are several reasons why using this mod can enhance your gaming experience and make it more enjoyable. Here are some of them:

-

real cricket go mod apk unlimited money
-real cricket go mod apk download latest version
-real cricket go mod apk hack
-real cricket go mod apk android 1
-real cricket go mod apk revdl
-real cricket go mod apk rexdl
-real cricket go mod apk free download
-real cricket go mod apk 2023
-real cricket go mod apk all unlocked
-real cricket go mod apk unlimited coins and gems
-real cricket go mod apk offline
-real cricket go mod apk no ads
-real cricket go mod apk unlimited tickets
-real cricket go mod apk obb
-real cricket go mod apk pure
-real cricket go mod apk unlimited everything
-real cricket go mod apk latest update
-real cricket go mod apk for pc
-real cricket go mod apk online
-real cricket go mod apk 0.2.4
-real cricket go hack version download
-real cricket go hack apk download
-real cricket go hack game download
-real cricket go hack unlimited money
-real cricket go hack app download
-real cricket go hack version 2023
-real cricket go hack version free download
-real cricket go hack version latest
-real cricket go hack version online
-real cricket go hack version offline
-download game real cricket go mod apk
-download game real cricket go hack version
-download game real cricket go unlimited money
-download game real cricket go latest version
-download game real cricket go offline mode
-download game real cricket go for android
-download game real cricket go for pc
-download game real cricket go online mode
-download game real cricket go 2023 version
-download game real cricket go all unlocked

-

Benefits of Real Cricket Go APK Mod

-

Some of the benefits that you can get from using the Real Cricket Go APK Mod are:

- -

Risks of Real Cricket Go APK Mod

-

However, using the Real Cricket Go APK Mod also comes with some risks that you should be aware of before downloading and installing it. Some of the risks that you might face are:

- -

Therefore, you should use the Real Cricket Go APK Mod at your own risk and discretion. We are not responsible for any damage or loss that may occur as a result of using this mod.

-

How to play Real Cricket Go APK Mod?

-

Playing Real Cricket Go APK Mod is not very different from playing the original version of the game. You just need to follow the same steps and rules as you would in the normal game. However, you will have more options and freedom to customize your gameplay and enjoy more features and content. Here are some tips on how to play Real Cricket Go APK Mod:

-

Game modes and tournaments

-

Real Cricket Go APK Mod offers you various game modes and tournaments that you can choose from depending on your mood and preference. Some of the game modes and tournaments that you can play are:

- - - - - - - -
| Game Mode | Description |
| --- | --- |
| Quick Match | This is a simple and fast mode that lets you play a single match against any team of your choice. You can select the overs, difficulty level, stadium, and weather conditions. |
| World Cup | This is a mode that lets you participate in the most prestigious cricket tournament in the world. You can select your team and compete with other teams in group stages and knockout rounds until you reach the final. |
| Champions Cup | This is a mode that lets you play in a mini version of the World Cup with eight teams. You can select your team and play in two groups of four teams each, followed by semi-finals and final. |
| Super Over | This is a mode that lets you play a thrilling tie-breaker match with only one over per side. You can select your team and try to score as many runs as possible or defend a target against your opponent. |
| Test Match | This is a mode that lets you play a classic five-day cricket match with two innings per side. You can select your team and try to score more runs than your opponent or bowl them out within the allotted time. |
-

Tips and tricks

-

Some of the tips and tricks that can help you improve your skills and performance in Real Cricket Go APK Mod are:

- -

Conclusion

-

Real Cricket Go APK Mod is a fun and exciting cricket game that lets you enjoy a realistic and thrilling cricket experience on your mobile device. You can play various game modes and tournaments, customize your team and gameplay, unlock unlimited resources and premium content, remove ads, and enjoy smoother gameplay with this mod. However, you should also be aware of the risks involved in using this mod, such as getting banned, losing data, or exposing your device to malware. Therefore, you should use this mod at your own risk and discretion. We hope this article has given you a comprehensive review of Real Cricket Go APK Mod and helped you decide whether to download it or not.

If you have any questions or doubts about Real Cricket Go APK Mod, you can check out the FAQs section below. We have answered some of the most common and frequently asked questions about this mod. If you have any other questions, feel free to leave a comment or contact us.

-

FAQs

-

Here are some of the FAQs about Real Cricket Go APK Mod:

-
    -
  1. Is Real Cricket Go APK Mod safe to use?
  2. -

    Real Cricket Go APK Mod is not an official version of the game and is not endorsed by the developers or Google Play Store. Therefore, it is not guaranteed to be safe or secure to use. You might face some risks such as getting banned, losing data, or exposing your device to malware. You should use this mod at your own risk and discretion.

    -
  3. How to update Real Cricket Go APK Mod?
  4. -

Real Cricket Go APK Mod is not updated automatically by the game or the Play Store. You will have to manually download and install the latest version of the mod from a reliable source. However, you might lose your progress or data if the mod is not compatible with the game updates. You should back up your data before updating the mod.

    -
  5. How to uninstall Real Cricket Go APK Mod?
  6. -

    If you want to uninstall Real Cricket Go APK Mod, you can follow these steps:

    - -
  7. Can I play Real Cricket Go APK Mod online or offline?
  8. -

    You can play Real Cricket Go APK Mod both online and offline. However, you might not be able to access some features or content that require an internet connection. You might also face some issues or errors while playing online with other players who are using the original version of the game.

    -
  9. Can I play Real Cricket Go APK Mod with friends?
  10. -

    You can play Real Cricket Go APK Mod with friends who are also using the same mod. You can invite them to join your team or challenge them to a match. However, you might not be able to play with friends who are using the original version of the game or a different mod.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py deleted file mode 100644 index 28e47d2f4169bf2b7bbde296ed8ba08594873872..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py +++ /dev/null @@ -1,631 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import paddle -import paddle.nn as nn - -################################################################################ -# Code for the text transformer model -################################################################################ -from paddlenlp.transformers import ( - PretrainedModel, - PretrainedTokenizer, - register_base_model, -) -from paddlenlp.transformers.model_outputs import ( - BaseModelOutputWithPoolingAndCrossAttentions, -) - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import deprecate, logging - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class LDMBertPretrainedModel(PretrainedModel): - pretrained_init_configuration = {} - pretrained_resource_files_map = {} - base_model_prefix = "ldmbert" - - def init_weights(self, layer): - if isinstance(layer, (nn.Linear, nn.Embedding)): - layer.weight.set_value( - paddle.normal( - mean=0.0, - std=self.initializer_range - if hasattr(self, "initializer_range") - else self.ldmbert.config["initializer_range"], - shape=layer.weight.shape, - ) - ) - - -class LDMBertEmbeddings(nn.Layer): - def __init__(self, vocab_size, hidden_size=768, hidden_dropout_prob=0.0, max_position_embeddings=512): - super().__init__() - self.word_embeddings = nn.Embedding(vocab_size, hidden_size) - self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size) - self.dropout = nn.Dropout(hidden_dropout_prob) - - def forward(self, input_ids, position_ids=None): - if position_ids is None: - ones = paddle.ones_like(input_ids, dtype="int64") - seq_length = paddle.cumsum(ones, axis=-1) - position_ids = seq_length - ones - position_ids.stop_gradient = True - - input_embedings = self.word_embeddings(input_ids) - position_embeddings = self.position_embeddings(position_ids) - - embeddings = input_embedings + position_embeddings - embeddings = self.dropout(embeddings) - return embeddings - - -class TransformerEncoderLayer(nn.TransformerEncoderLayer): - def __init__( - self, - 
d_model, - nhead, - dim_feedforward, - dropout=0.1, - activation="gelu", - attn_dropout=None, - act_dropout=None, - normalize_before=False, - weight_attr=None, - bias_attr=None, - head_dim=64, - ): - super().__init__( - d_model, - nhead, - dim_feedforward, - dropout, - activation, - attn_dropout, - act_dropout, - normalize_before, - weight_attr, - bias_attr, - ) - # update self attn - self.self_attn = LDMBertAttention( - d_model, head_dim, nhead, dropout=attn_dropout, weight_attr=weight_attr, bias_attr=False - ) - - -@register_base_model -class LDMBertModel(LDMBertPretrainedModel): - _no_split_modules = [] - - def __init__( - self, - vocab_size=30522, - max_position_embeddings=77, - encoder_layers=32, - encoder_ffn_dim=5120, - encoder_attention_heads=8, - head_dim=64, - activation_function="gelu", - d_model=1280, - dropout=0.0, - attention_dropout=0.0, - activation_dropout=0.0, - init_std=0.02, - pad_token_id=0, - **kwargs - ): - super().__init__() - self.pad_token_id = pad_token_id - self.initializer_range = init_std - self.embeddings = LDMBertEmbeddings(vocab_size, d_model, dropout, max_position_embeddings) - encoder_layer = TransformerEncoderLayer( - d_model, - encoder_attention_heads, - encoder_ffn_dim, - dropout=dropout, - activation=activation_function, - attn_dropout=attention_dropout, - act_dropout=activation_dropout, - normalize_before=True, - head_dim=head_dim, - ) - - self.encoder = nn.TransformerEncoder(encoder_layer, encoder_layers) - self.final_layer_norm = nn.LayerNorm(d_model) - self.apply(self.init_weights) - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def forward( - self, - input_ids, - position_ids=None, - attention_mask=None, - output_hidden_states=False, - output_attentions=False, - return_dict=False, - ): - - if attention_mask is not None and attention_mask.ndim == 2: - # attention_mask [batch_size, sequence_length] -> [batch_size, 1, 1, sequence_length] - attention_mask = attention_mask.unsqueeze(axis=[1, 2]).astype(paddle.get_default_dtype()) - attention_mask = (1.0 - attention_mask) * -1e4 - - embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids) - - encoder_outputs = self.encoder( - embedding_output, - src_mask=attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - if isinstance(encoder_outputs, type(embedding_output)): - sequence_output = self.final_layer_norm(encoder_outputs) - return (sequence_output,) - else: - sequence_output = encoder_outputs[0] - sequence_output = self.final_layer_norm(sequence_output) - if not return_dict: - return (sequence_output,) + encoder_outputs[1:] - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -class LDMBertAttention(nn.MultiHeadAttention): - def __init__( - self, - embed_dim, - head_dim, - num_heads, - dropout=0.0, - kdim=None, - vdim=None, - need_weights=False, - weight_attr=None, - bias_attr=None, - ): - super().__init__(embed_dim, num_heads, dropout, kdim, vdim, need_weights, weight_attr, bias_attr) - assert embed_dim > 0, "Expected embed_dim to be greater than 0, " "but recieved {}".format(embed_dim) - assert num_heads > 0, "Expected num_heads to be greater than 0, " "but recieved {}".format(num_heads) - - self.embed_dim = embed_dim - self.kdim = kdim if kdim is 
not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.num_heads = num_heads - self.dropout = dropout - self.need_weights = need_weights - - self.head_dim = head_dim - self.inner_dim = head_dim * num_heads - self.scaling = self.head_dim**-0.5 - - self.q_proj = nn.Linear(embed_dim, self.inner_dim, weight_attr, bias_attr=bias_attr) - self.k_proj = nn.Linear(self.kdim, self.inner_dim, weight_attr, bias_attr=bias_attr) - self.v_proj = nn.Linear(self.vdim, self.inner_dim, weight_attr, bias_attr=bias_attr) - self.out_proj = nn.Linear(self.inner_dim, embed_dim, weight_attr) - - -class LDMBertModelForMaskedLM(LDMBertPretrainedModel): - def __init__(self, ldmbert): - super().__init__() - self.ldmbert = ldmbert - self.to_logits = nn.Linear(ldmbert.config["hidden_size"], ldmbert.config["vocab_size"]) - self.apply(self.init_weights) - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - outputs = self.ldmbert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - return outputs - - -class LDMTextToImagePipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular xxxx, etc.) - - Parameters: - vqvae ([`VQModel`]): - Vector-quantized (VQ) Model to encode and decode images to and from latent representations. - bert ([`LDMBertModel`]): - Text-encoder model based on [BERT](https://paddlenlp.readthedocs.io/zh/latest/source/paddlenlp.transformers.bert.modeling.html#paddlenlp.transformers.bert.modeling.BertModel) architecture. - tokenizer (`paddlenlp.transformers.BertTokenizer`): - Tokenizer of class - [BertTokenizer](https://paddlenlp.readthedocs.io/zh/latest/source/paddlenlp.transformers.bert.tokenizer.html#paddlenlp.transformers.bert.tokenizer.BertTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`] - or [`DPMSolverMultistepScheduler`]. - """ - - def __init__( - self, - vqvae: Union[VQModel, AutoencoderKL], - bert: PretrainedModel, - tokenizer: PretrainedTokenizer, - unet: Union[UNet2DModel, UNet2DConditionModel], - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - ): - super().__init__() - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. 
If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - self.register_modules(vqvae=vqvae, bert=bert, tokenizer=tokenizer, unet=unet, scheduler=scheduler) - self.vae_scale_factor = 2 ** (len(self.vqvae.config.block_out_channels) - 1) - - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pd", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pd").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not paddle.equal_all( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because LDMBert can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_embeddings = self.bert(text_input_ids) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.tile([1, num_images_per_prompt, 1]) - text_embeddings = text_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1]) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." 
- ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pd", - ) - - uncond_embeddings = self.bert(uncond_input.input_ids) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.tile([1, num_images_per_prompt, 1]) - uncond_embeddings = uncond_embeddings.reshape([batch_size * num_images_per_prompt, seq_len, -1]) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = paddle.concat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vqvae.decode(latents).sample - image = (image / 2 + 0.5).clip(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.transpose([0, 2, 3, 1]).cast("float32").numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, height, width, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, generator, latents=None): - shape = [batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor] - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - if isinstance(generator, list): - shape = [ - 1, - ] + shape[1:] - latents = [paddle.randn(shape, generator=generator[i], dtype=dtype) for i in range(batch_size)] - latents = paddle.concat(latents, axis=0) - else: - latents = paddle.randn(shape, generator=generator, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @paddle.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: int = 256, - width: int = 256, - num_inference_steps: int = 50, - guidance_scale: float = 1.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - latents: Optional[paddle.Tensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to 256: - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 256: - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 1.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`paddle.Generator`, *optional*): - One or a list of paddle generator(s) to make generation deterministic. - latents (`paddle.Tensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. 
- callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - text_embeddings.dtype, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta_mlm.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta_mlm.py deleted file mode 100644 index eb5ee529dbb41ccb8a8c506fcbb2c24078893a80..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta_mlm.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import torch -import torch.nn.functional as F -from torch import nn - -from text_to_speech.modules.tts.syntaspeech.syntaspeech import SyntaSpeech -from tasks.tts.ps_adv_mlm import PortaSpeechAdvMLMTask -from text_to_speech.utils.commons.hparams import hparams - - -class SyntaSpeechMLMTask(PortaSpeechAdvMLMTask): - def build_tts_model(self): - ph_dict_size = len(self.token_encoder) - word_dict_size = len(self.word_encoder) - self.model = SyntaSpeech(ph_dict_size, word_dict_size, hparams) - - self.gen_params = [p for p in self.model.parameters() if p.requires_grad] - self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)] - self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)] - self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)] - self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) and ('bert' not in k) and p.requires_grad ] - - self.use_bert = True if len(self.bert_params) > 0 else False - - \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/README.md b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/README.md deleted file mode 100644 index e7d761067afc34b6a7249faa187752b39ca24ffd..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/README.md +++ /dev/null @@ -1,7 +0,0 @@ -# 2D Fashion Landmark Detection - -2D fashion landmark detection (also referred to as fashion alignment) aims to detect the key-point located at the functional region of clothes, for example the neckline and the cuff. - -## Data preparation - -Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_fashion_landmark.md) to prepare data. diff --git a/spaces/Abhilashvj/planogram-compliance/data/scripts/get_coco128.sh b/spaces/Abhilashvj/planogram-compliance/data/scripts/get_coco128.sh deleted file mode 100644 index e7ddce89b11552b9fa7d0d85c56fc4e3df2481cd..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/data/scripts/get_coco128.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) -# Example usage: bash data/scripts/get_coco128.sh -# parent -# ├── yolov5 -# └── datasets -# └── coco128 ← downloads here - -# Download/unzip images and labels -d='../datasets' # unzip directory -url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ -f='coco128.zip' # or 'coco128-segments.zip', 68 MB -echo 'Downloading' $url$f ' ...' 
-curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & - -wait # finish background tasks diff --git a/spaces/Abhilashvj/planogram-compliance/utils/google_app_engine/Dockerfile b/spaces/Abhilashvj/planogram-compliance/utils/google_app_engine/Dockerfile deleted file mode 100644 index 0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/google_app_engine/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM gcr.io/google-appengine/python - -# Create a virtualenv for dependencies. This isolates these packages from -# system-level packages. -# Use -p python3 or -p python3.7 to select python version. Default is version 2. -RUN virtualenv /env -p python3 - -# Setting these environment variables are the same as running -# source /env/bin/activate. -ENV VIRTUAL_ENV /env -ENV PATH /env/bin:$PATH - -RUN apt-get update && apt-get install -y python-opencv - -# Copy the application's requirements.txt and run pip to install all -# dependencies into the virtualenv. -ADD requirements.txt /app/requirements.txt -RUN pip install -r /app/requirements.txt - -# Add the application source code. -ADD . /app - -# Run a WSGI server to serve the application. gunicorn must be declared as -# a dependency in requirements.txt. -CMD gunicorn -b :$PORT main:app diff --git a/spaces/Adapting/YouTube-Downloader/README.md b/spaces/Adapting/YouTube-Downloader/README.md deleted file mode 100644 index 179377f9d4902fb37224e56ff4c15ce6912e56d7..0000000000000000000000000000000000000000 --- a/spaces/Adapting/YouTube-Downloader/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YouTube Downloader -emoji: 🐢 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Aditya9790/yolo7-object-tracking/utils/aws/__init__.py b/spaces/Aditya9790/yolo7-object-tracking/utils/aws/__init__.py deleted file mode 100644 index e9691f241edc06ad981b36ca27f7eff9e46686ed..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/utils/aws/__init__.py +++ /dev/null @@ -1 +0,0 @@ -#init \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Modal.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Modal.js deleted file mode 100644 index fea389a579f7f01fc8c1589bfe29d1bd9e623b0e..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Modal.js +++ /dev/null @@ -1,29 +0,0 @@ -import IsFunction from '../../../../plugins/utils/object/IsFunction.js'; -import ModalMethods from '../../basesizer/ModalMethods.js'; - -var Modal = function (config, onClose) { - if (IsFunction(config)) { - onClose = config; - config = undefined; - } - - if (config === undefined) { - config = {}; - } - - var zeroButtonMode = (this.buttonMode === 0); - - if (!config.hasOwnProperty('anyTouchClose')) { - config.anyTouchClose = zeroButtonMode; - } - - if (!config.hasOwnProperty('manualClose')) { - config.manualClose = !zeroButtonMode; - } - - ModalMethods.modal.call(this, config, onClose); - - return this; -} - -export default Modal; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/NinePatch.js 
b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/NinePatch.js deleted file mode 100644 index 443941eee736eb66576c0854a84db6dee5f36a59..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/NinePatch.js +++ /dev/null @@ -1,2 +0,0 @@ -import NinePatch from '../../../plugins/ninepatch2.js' -export default NinePatch; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AddChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AddChildMethods.js deleted file mode 100644 index b585dfb671e1b82c9660549cf858b6a3b8cbdf80..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AddChildMethods.js +++ /dev/null @@ -1,170 +0,0 @@ -import AddChild from '../basesizer/utils/AddChild.js'; -import GetBoundsConfig from '../utils/GetBoundsConfig.js'; -import ALIGNMODE from '../utils/AlignConst.js'; -import Space from '../space/Space.js'; -import { GetDisplayWidth, GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js'; -import GetNearestChildIndex from './GetNearestChildIndex.js'; - -const IsPlainObject = Phaser.Utils.Objects.IsPlainObject; -const GetValue = Phaser.Utils.Objects.GetValue; -const ALIGN_CENTER = Phaser.Display.Align.CENTER; -const PROPORTIONMODE = { - min: 0, - full: -1, -} - -var Add = function ( - gameObject, - proportion, align, paddingConfig, expand, - childKey, index, - minWidth, minHeight, - fitRatio, -) { - - AddChild.call(this, gameObject); - - var isRexSpace = gameObject.isRexSpace; - var proportionType = typeof (proportion); - if (proportion === null) { - return this; - } else if (proportionType === 'number') { - - } else if (proportionType === 'string') { - proportion = PROPORTIONMODE[proportion]; - } else if (IsPlainObject(proportion)) { - var config = proportion; - proportion = GetValue(config, 'proportion', undefined); - align = GetValue(config, 'align', ALIGN_CENTER); - paddingConfig = GetValue(config, 'padding', 0); - expand = GetValue(config, 'expand', false); - childKey = GetValue(config, 'key', undefined); - index = GetValue(config, 'index', undefined); - - if (!gameObject.isRexSizer) { - minWidth = GetValue(config, 'minWidth', undefined); - minHeight = GetValue(config, 'minHeight', undefined); - } - - fitRatio = GetValue(config, 'fitRatio', 0); // width/height - } - - if (typeof (align) === 'string') { - align = ALIGNMODE[align]; - } - - if (proportion === undefined) { - proportion = (isRexSpace) ? 1 : 0; - } - if (align === undefined) { - align = ALIGN_CENTER; - } - if (paddingConfig === undefined) { - paddingConfig = 0; - } - if (expand === undefined) { - expand = false; - } - - if (minWidth === undefined) { - if (isRexSpace) { - minWidth = 0; - } else if (!gameObject.isRexSizer) { - minWidth = gameObject._minWidth; - } - } - if (minHeight === undefined) { - if (isRexSpace) { - minHeight = 0; - } else if (!gameObject.isRexSizer) { - minHeight = gameObject._minHeight; - } - } - - if (fitRatio === undefined) { - fitRatio = 0; - } - - var config = this.getSizerConfig(gameObject); - config.proportion = proportion; - config.align = align; - config.padding = GetBoundsConfig(paddingConfig); - config.expand = expand; - config.fitRatio = (proportion === 0) ? 
fitRatio : 0; - - if ((index === undefined) || (index >= this.sizerChildren.length)) { - this.sizerChildren.push(gameObject); - } else { - this.sizerChildren.splice(index, 0, gameObject); - } - - if (!gameObject.isRexSizer) { // Expand normal game object - if (proportion > 0) { - if (this.orientation === 0) { // x - // minWidth is still undefined, uses current display width - gameObject.minWidth = (minWidth === undefined) ? GetDisplayWidth(gameObject) : minWidth; - } else { - // minHeight is still undefined, uses current display height - gameObject.minHeight = (minHeight === undefined) ? GetDisplayHeight(gameObject) : minHeight; - } - } - if (expand) { - if (this.orientation === 0) { // x - // Might have minHeight value, or still undefined - gameObject.minHeight = minHeight; - } else { - // Might have minWidth value, or still undefined - gameObject.minWidth = minWidth; - } - } - } - - if (childKey !== undefined) { - this.addChildrenMap(childKey, gameObject) - } - - return this; -}; - -export default { - add: Add, // sizer.add could be override - - addSpace(proportion) { - this.insertSpace(undefined, proportion); - return this; - }, - - insertSpace(index, proportion) { - if (proportion === undefined) { - proportion = 1; - } - Add.call(this, new Space(this.scene), - { - proportion: proportion, - minWidth: 0, - minHeight: 0, - index: index - } - ); - // No problem if sizer.add is override - return this; - }, - - insert(index, gameObject, proportion, align, paddingConfig, expand, childKey, minSize) { - if (IsPlainObject(proportion)) { - proportion.index = index; - } - - Add.call(this, gameObject, proportion, align, paddingConfig, expand, childKey, index, minSize); - // No problem if sizer.add is override - return this; - }, - - insertAtPosition(x, y, gameObject, proportion, align, paddingConfig, expand, childKey, minSize) { - var index = GetNearestChildIndex.call(this, x, y); - if (index === -1) { - index = undefined; - } - this.insert(index, gameObject, proportion, align, paddingConfig, expand, childKey, minSize); - return this; - } -} \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Boolean.pm b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Boolean.pm deleted file mode 100644 index 38be6a3817b3b3b5632f4ee6bd3bba7397af567e..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Boolean.pm +++ /dev/null @@ -1,27 +0,0 @@ -=head1 NAME - -JSON::PP::Boolean - dummy module providing JSON::PP::Boolean - -=head1 SYNOPSIS - - # do not "use" yourself - -=head1 DESCRIPTION - -This module exists only to provide overload resolution for Storable -and similar modules. See L for more info about this class. - -=cut - -use JSON::backportPP (); -use strict; - -1; - -=head1 AUTHOR - -This idea is from L written by -Marc Lehmann - -=cut - diff --git a/spaces/AlekseyKorshuk/model-evaluation/tabs/playground.py b/spaces/AlekseyKorshuk/model-evaluation/tabs/playground.py deleted file mode 100644 index 0e9d28da8135b37f9c1e620eae5095e2e282a2b8..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/model-evaluation/tabs/playground.py +++ /dev/null @@ -1,123 +0,0 @@ -import gradio as gr -from conversation import Conversation - - -def get_tab_playground(download_bot_config, get_bot_profile, model_mapping): - gr.Markdown(""" - # 🎢 Playground 🎢 - ## Rules - * Chat with any model you would like with any bot from the Chai app. - * Click “Clear” to start a new conversation. 
- """) - default_bot_id = "_bot_e21de304-6151-4a04-b025-4c553ae8cbca" - bot_config = download_bot_config(default_bot_id) - user_state = gr.State( - bot_config - ) - with gr.Row(): - bot_id = gr.Textbox(label="Chai bot ID", value=default_bot_id, interactive=True) - reload_bot_button = gr.Button("Reload bot") - - bot_profile = gr.HTML(get_bot_profile(bot_config)) - with gr.Accordion("Bot config:", open=False): - bot_config_text = gr.Markdown(f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}") - - first_message = (None, bot_config["firstMessage"]) - chatbot = gr.Chatbot([first_message]) - - msg = gr.Textbox(label="Message", value="Hi there!") - with gr.Row(): - send = gr.Button("Send") - regenerate = gr.Button("Regenerate") - clear = gr.Button("Clear") - values = list(model_mapping.keys()) - model_tag = gr.Dropdown(values, value=values[0], label="Model version") - model = model_mapping[model_tag.value] - - with gr.Accordion("Generation parameters", open=False): - temperature = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["temperature"], - interactive=True, label="Temperature") - repetition_penalty = gr.Slider(minimum=0.0, maximum=2.0, - value=model.generation_params["repetition_penalty"], - interactive=True, label="Repetition penalty") - max_new_tokens = gr.Slider(minimum=1, maximum=512, value=model.generation_params["max_new_tokens"], - interactive=True, label="Max new tokens") - top_k = gr.Slider(minimum=1, maximum=100, value=model.generation_params["top_k"], - interactive=True, label="Top-K") - top_p = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["top_p"], - interactive=True, label="Top-P") - - def respond(message, chat_history, user_state, model_tag, - temperature, repetition_penalty, max_new_tokens, top_k, top_p): - custom_generation_params = { - 'temperature': temperature, - 'repetition_penalty': repetition_penalty, - 'max_new_tokens': max_new_tokens, - 'top_k': top_k, - 'top_p': top_p, - } - conv = Conversation(user_state) - conv.set_chat_history(chat_history) - conv.add_user_message(message) - model = model_mapping[model_tag] - bot_message = model.generate_response(conv, custom_generation_params) - chat_history.append( - (message, bot_message) - ) - return "", chat_history - - def clear_chat(chat_history, user_state): - chat_history = [(None, user_state["firstMessage"])] - return chat_history - - def regenerate_response(chat_history, user_state, model_tag, - temperature, repetition_penalty, max_new_tokens, top_k, top_p): - custom_generation_params = { - 'temperature': temperature, - 'repetition_penalty': repetition_penalty, - 'max_new_tokens': max_new_tokens, - 'top_k': top_k, - 'top_p': top_p, - } - last_row = chat_history.pop(-1) - chat_history.append((last_row[0], None)) - model = model_mapping[model_tag] - conv = Conversation(user_state) - conv.set_chat_history(chat_history) - bot_message = model.generate_response(conv, custom_generation_params) - chat_history[-1] = (last_row[0], bot_message) - return chat_history - - def reload_bot(bot_id, bot_profile, chat_history): - bot_config = download_bot_config(bot_id) - bot_profile = get_bot_profile(bot_config) - return bot_profile, [(None, bot_config[ - "firstMessage"])], bot_config, f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}" - - def get_generation_args(model_tag): - model = model_mapping[model_tag] - return ( - model.generation_params["temperature"], - model.generation_params["repetition_penalty"], - model.generation_params["max_new_tokens"], - 
model.generation_params["top_k"], - model.generation_params["top_p"], - ) - - model_tag.change(get_generation_args, [model_tag], [temperature, repetition_penalty, max_new_tokens, top_k, - top_p], queue=False) - send.click(respond, - [msg, chatbot, user_state, model_tag, temperature, repetition_penalty, max_new_tokens, top_k, - top_p], [msg, chatbot], - queue=False) - msg.submit(respond, - [msg, chatbot, user_state, model_tag, temperature, repetition_penalty, max_new_tokens, top_k, - top_p], [msg, chatbot], - queue=False) - clear.click(clear_chat, [chatbot, user_state], [chatbot], queue=False) - regenerate.click(regenerate_response, - [chatbot, user_state, model_tag, temperature, repetition_penalty, max_new_tokens, top_k, - top_p], [chatbot], queue=False) - reload_bot_button.click(reload_bot, [bot_id, bot_profile, chatbot], - [bot_profile, chatbot, user_state, bot_config_text], - queue=False) diff --git a/spaces/AlgoveraAI/algovera_squad_active_passive_model/README.md b/spaces/AlgoveraAI/algovera_squad_active_passive_model/README.md deleted file mode 100644 index 298fb5fb15199ae2bdc74f7cfd3b47a70f2deb5c..0000000000000000000000000000000000000000 --- a/spaces/AlgoveraAI/algovera_squad_active_passive_model/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Algovera_squad_active_passive_model -emoji: 🐢 -colorFrom: blue -colorTo: purple -sdk: streamlit -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/autoencoder_kl.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/autoencoder_kl.py deleted file mode 100644 index 2390d2bc58261c76a38cd18dc48dbd7fb59a4d58..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/autoencoder_kl.py +++ /dev/null @@ -1,417 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Dict, Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..loaders import FromOriginalVAEMixin -from ..utils import BaseOutput, apply_forward_hook -from .attention_processor import AttentionProcessor, AttnProcessor -from .modeling_utils import ModelMixin -from .vae import Decoder, DecoderOutput, DiagonalGaussianDistribution, Encoder - - -@dataclass -class AutoencoderKLOutput(BaseOutput): - """ - Output of AutoencoderKL encoding method. - - Args: - latent_dist (`DiagonalGaussianDistribution`): - Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`. - `DiagonalGaussianDistribution` allows for sampling latents from the distribution. 
- """ - - latent_dist: "DiagonalGaussianDistribution" - - -class AutoencoderKL(ModelMixin, ConfigMixin, FromOriginalVAEMixin): - r""" - A VAE model with KL loss for encoding images into latents and decoding latent representations into images. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented - for all models (such as downloading or saving). - - Parameters: - in_channels (int, *optional*, defaults to 3): Number of channels in the input image. - out_channels (int, *optional*, defaults to 3): Number of channels in the output. - down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`): - Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`): - Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`): - Tuple of block output channels. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - latent_channels (`int`, *optional*, defaults to 4): Number of channels in the latent space. - sample_size (`int`, *optional*, defaults to `32`): Sample input size. - scaling_factor (`float`, *optional*, defaults to 0.18215): - The component-wise standard deviation of the trained latent space computed using the first batch of the - training set. This is used to scale the latent space to have unit variance when training the diffusion - model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the - diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 - / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image - Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. - force_upcast (`bool`, *optional*, default to `True`): - If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. 
VAE - can be fine-tuned / trained to a lower range without loosing too much precision in which case - `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix - """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str] = ("DownEncoderBlock2D",), - up_block_types: Tuple[str] = ("UpDecoderBlock2D",), - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - act_fn: str = "silu", - latent_channels: int = 4, - norm_num_groups: int = 32, - sample_size: int = 32, - scaling_factor: float = 0.18215, - force_upcast: float = True, - ): - super().__init__() - - # pass init params to Encoder - self.encoder = Encoder( - in_channels=in_channels, - out_channels=latent_channels, - down_block_types=down_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - double_z=True, - ) - - # pass init params to Decoder - self.decoder = Decoder( - in_channels=latent_channels, - out_channels=out_channels, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - norm_num_groups=norm_num_groups, - act_fn=act_fn, - ) - - self.quant_conv = nn.Conv2d(2 * latent_channels, 2 * latent_channels, 1) - self.post_quant_conv = nn.Conv2d(latent_channels, latent_channels, 1) - - self.use_slicing = False - self.use_tiling = False - - # only relevant if vae tiling is enabled - self.tile_sample_min_size = self.config.sample_size - sample_size = ( - self.config.sample_size[0] - if isinstance(self.config.sample_size, (list, tuple)) - else self.config.sample_size - ) - self.tile_latent_min_size = int(sample_size / (2 ** (len(self.config.block_out_channels) - 1))) - self.tile_overlap_factor = 0.25 - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (Encoder, Decoder)): - module.gradient_checkpointing = value - - def enable_tiling(self, use_tiling: bool = True): - r""" - Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to - compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow - processing larger images. - """ - self.use_tiling = use_tiling - - def disable_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing - decoding in one step. - """ - self.enable_tiling(False) - - def enable_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.use_slicing = True - - def disable_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing - decoding in one step. - """ - self.use_slicing = False - - @property - # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.attn_processors - def attn_processors(self) -> Dict[str, AttentionProcessor]: - r""" - Returns: - `dict` of attention processors: A dictionary containing all attention processors used in the model with - indexed by its weight name. 
- """ - # set recursively - processors = {} - - def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): - if hasattr(module, "set_processor"): - processors[f"{name}.processor"] = module.processor - - for sub_name, child in module.named_children(): - fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) - - return processors - - for name, module in self.named_children(): - fn_recursive_add_processors(name, module, processors) - - return processors - - # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.set_attn_processor - def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): - r""" - Sets the attention processor to use to compute attention. - - Parameters: - processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): - The instantiated processor class or a dictionary of processor classes that will be set as the processor - for **all** `Attention` layers. - - If `processor` is a dict, the key needs to define the path to the corresponding cross attention - processor. This is strongly recommended when setting trainable attention processors. - - """ - count = len(self.attn_processors.keys()) - - if isinstance(processor, dict) and len(processor) != count: - raise ValueError( - f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" - f" number of attention layers: {count}. Please make sure to pass {count} processor classes." - ) - - def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): - if hasattr(module, "set_processor"): - if not isinstance(processor, dict): - module.set_processor(processor) - else: - module.set_processor(processor.pop(f"{name}.processor")) - - for sub_name, child in module.named_children(): - fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) - - for name, module in self.named_children(): - fn_recursive_attn_processor(name, module, processor) - - # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.set_default_attn_processor - def set_default_attn_processor(self): - """ - Disables custom attention processors and sets the default attention implementation. 
- """ - self.set_attn_processor(AttnProcessor()) - - @apply_forward_hook - def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput: - if self.use_tiling and (x.shape[-1] > self.tile_sample_min_size or x.shape[-2] > self.tile_sample_min_size): - return self.tiled_encode(x, return_dict=return_dict) - - if self.use_slicing and x.shape[0] > 1: - encoded_slices = [self.encoder(x_slice) for x_slice in x.split(1)] - h = torch.cat(encoded_slices) - else: - h = self.encoder(x) - - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - - if not return_dict: - return (posterior,) - - return AutoencoderKLOutput(latent_dist=posterior) - - def _decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - if self.use_tiling and (z.shape[-1] > self.tile_latent_min_size or z.shape[-2] > self.tile_latent_min_size): - return self.tiled_decode(z, return_dict=return_dict) - - z = self.post_quant_conv(z) - dec = self.decoder(z) - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - @apply_forward_hook - def decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - if self.use_slicing and z.shape[0] > 1: - decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)] - decoded = torch.cat(decoded_slices) - else: - decoded = self._decode(z).sample - - if not return_dict: - return (decoded,) - - return DecoderOutput(sample=decoded) - - def blend_v(self, a, b, blend_extent): - blend_extent = min(a.shape[2], b.shape[2], blend_extent) - for y in range(blend_extent): - b[:, :, y, :] = a[:, :, -blend_extent + y, :] * (1 - y / blend_extent) + b[:, :, y, :] * (y / blend_extent) - return b - - def blend_h(self, a, b, blend_extent): - blend_extent = min(a.shape[3], b.shape[3], blend_extent) - for x in range(blend_extent): - b[:, :, :, x] = a[:, :, :, -blend_extent + x] * (1 - x / blend_extent) + b[:, :, :, x] * (x / blend_extent) - return b - - def tiled_encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput: - r"""Encode a batch of images using a tiled encoder. - - When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several - steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is - different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the - tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the - output, but they should be much less noticeable. - - Args: - x (`torch.FloatTensor`): Input batch of images. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.autoencoder_kl.AutoencoderKLOutput`] instead of a plain tuple. - - Returns: - [`~models.autoencoder_kl.AutoencoderKLOutput`] or `tuple`: - If return_dict is True, a [`~models.autoencoder_kl.AutoencoderKLOutput`] is returned, otherwise a plain - `tuple` is returned. - """ - overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor)) - blend_extent = int(self.tile_latent_min_size * self.tile_overlap_factor) - row_limit = self.tile_latent_min_size - blend_extent - - # Split the image into 512x512 tiles and encode them separately. 
- rows = [] - for i in range(0, x.shape[2], overlap_size): - row = [] - for j in range(0, x.shape[3], overlap_size): - tile = x[:, :, i : i + self.tile_sample_min_size, j : j + self.tile_sample_min_size] - tile = self.encoder(tile) - tile = self.quant_conv(tile) - row.append(tile) - rows.append(row) - result_rows = [] - for i, row in enumerate(rows): - result_row = [] - for j, tile in enumerate(row): - # blend the above tile and the left tile - # to the current tile and add the current tile to the result row - if i > 0: - tile = self.blend_v(rows[i - 1][j], tile, blend_extent) - if j > 0: - tile = self.blend_h(row[j - 1], tile, blend_extent) - result_row.append(tile[:, :, :row_limit, :row_limit]) - result_rows.append(torch.cat(result_row, dim=3)) - - moments = torch.cat(result_rows, dim=2) - posterior = DiagonalGaussianDistribution(moments) - - if not return_dict: - return (posterior,) - - return AutoencoderKLOutput(latent_dist=posterior) - - def tiled_decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - r""" - Decode a batch of images using a tiled decoder. - - Args: - z (`torch.FloatTensor`): Input batch of latent vectors. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.vae.DecoderOutput`] instead of a plain tuple. - - Returns: - [`~models.vae.DecoderOutput`] or `tuple`: - If return_dict is True, a [`~models.vae.DecoderOutput`] is returned, otherwise a plain `tuple` is - returned. - """ - overlap_size = int(self.tile_latent_min_size * (1 - self.tile_overlap_factor)) - blend_extent = int(self.tile_sample_min_size * self.tile_overlap_factor) - row_limit = self.tile_sample_min_size - blend_extent - - # Split z into overlapping 64x64 tiles and decode them separately. - # The tiles have an overlap to avoid seams between tiles. - rows = [] - for i in range(0, z.shape[2], overlap_size): - row = [] - for j in range(0, z.shape[3], overlap_size): - tile = z[:, :, i : i + self.tile_latent_min_size, j : j + self.tile_latent_min_size] - tile = self.post_quant_conv(tile) - decoded = self.decoder(tile) - row.append(decoded) - rows.append(row) - result_rows = [] - for i, row in enumerate(rows): - result_row = [] - for j, tile in enumerate(row): - # blend the above tile and the left tile - # to the current tile and add the current tile to the result row - if i > 0: - tile = self.blend_v(rows[i - 1][j], tile, blend_extent) - if j > 0: - tile = self.blend_h(row[j - 1], tile, blend_extent) - result_row.append(tile[:, :, :row_limit, :row_limit]) - result_rows.append(torch.cat(result_row, dim=3)) - - dec = torch.cat(result_rows, dim=2) - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - def forward( - self, - sample: torch.FloatTensor, - sample_posterior: bool = False, - return_dict: bool = True, - generator: Optional[torch.Generator] = None, - ) -> Union[DecoderOutput, torch.FloatTensor]: - r""" - Args: - sample (`torch.FloatTensor`): Input sample. - sample_posterior (`bool`, *optional*, defaults to `False`): - Whether to sample from the posterior. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`DecoderOutput`] instead of a plain tuple. 
- """ - x = sample - posterior = self.encode(x).latent_dist - if sample_posterior: - z = posterior.sample(generator=generator) - else: - z = posterior.mode() - dec = self.decode(z).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/modeling_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/modeling_utils.py deleted file mode 100644 index 1fa96514c0a9e39b9321550f6d85a8e11b0deb36..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/modeling_utils.py +++ /dev/null @@ -1,980 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import itertools -import os -import re -from functools import partial -from typing import Any, Callable, List, Optional, Tuple, Union - -import torch -from torch import Tensor, device, nn - -from .. import __version__ -from ..utils import ( - CONFIG_NAME, - DIFFUSERS_CACHE, - FLAX_WEIGHTS_NAME, - HF_HUB_OFFLINE, - SAFETENSORS_WEIGHTS_NAME, - WEIGHTS_NAME, - _add_variant, - _get_model_file, - deprecate, - is_accelerate_available, - is_safetensors_available, - is_torch_version, - logging, -) - - -logger = logging.get_logger(__name__) - - -if is_torch_version(">=", "1.9.0"): - _LOW_CPU_MEM_USAGE_DEFAULT = True -else: - _LOW_CPU_MEM_USAGE_DEFAULT = False - - -if is_accelerate_available(): - import accelerate - from accelerate.utils import set_module_tensor_to_device - from accelerate.utils.versions import is_torch_version - -if is_safetensors_available(): - import safetensors - - -def get_parameter_device(parameter: torch.nn.Module): - try: - parameters_and_buffers = itertools.chain(parameter.parameters(), parameter.buffers()) - return next(parameters_and_buffers).device - except StopIteration: - # For torch.nn.DataParallel compatibility in PyTorch 1.5 - - def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]: - tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] - return tuples - - gen = parameter._named_members(get_members_fn=find_tensor_attributes) - first_tuple = next(gen) - return first_tuple[1].device - - -def get_parameter_dtype(parameter: torch.nn.Module): - try: - params = tuple(parameter.parameters()) - if len(params) > 0: - return params[0].dtype - - buffers = tuple(parameter.buffers()) - if len(buffers) > 0: - return buffers[0].dtype - - except StopIteration: - # For torch.nn.DataParallel compatibility in PyTorch 1.5 - - def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]: - tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] - return tuples - - gen = parameter._named_members(get_members_fn=find_tensor_attributes) - first_tuple = next(gen) - return first_tuple[1].dtype - - -def 
load_state_dict(checkpoint_file: Union[str, os.PathLike], variant: Optional[str] = None): - """ - Reads a checkpoint file, returning properly formatted errors if they arise. - """ - try: - if os.path.basename(checkpoint_file) == _add_variant(WEIGHTS_NAME, variant): - return torch.load(checkpoint_file, map_location="cpu") - else: - return safetensors.torch.load_file(checkpoint_file, device="cpu") - except Exception as e: - try: - with open(checkpoint_file) as f: - if f.read().startswith("version"): - raise OSError( - "You seem to have cloned a repository without having git-lfs installed. Please install " - "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " - "you cloned." - ) - else: - raise ValueError( - f"Unable to locate the file {checkpoint_file} which is necessary to load this pretrained " - "model. Make sure you have saved the model properly." - ) from e - except (UnicodeDecodeError, ValueError): - raise OSError( - f"Unable to load weights from checkpoint file for '{checkpoint_file}' " - f"at '{checkpoint_file}'. " - "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True." - ) - - -def _load_state_dict_into_model(model_to_load, state_dict): - # Convert old format to new format if needed from a PyTorch state_dict - # copy state_dict so _load_from_state_dict can modify it - state_dict = state_dict.copy() - error_msgs = [] - - # PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants - # so we need to apply the function recursively. - def load(module: torch.nn.Module, prefix=""): - args = (state_dict, prefix, {}, True, [], [], error_msgs) - module._load_from_state_dict(*args) - - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + ".") - - load(model_to_load) - - return error_msgs - - -class ModelMixin(torch.nn.Module): - r""" - Base class for all models. - - [`ModelMixin`] takes care of storing the model configuration and provides methods for loading, downloading and - saving models. - - - **config_name** ([`str`]) -- Filename to save a model to when calling [`~models.ModelMixin.save_pretrained`]. - """ - config_name = CONFIG_NAME - _automatically_saved_args = ["_diffusers_version", "_class_name", "_name_or_path"] - _supports_gradient_checkpointing = False - _keys_to_ignore_on_load_unexpected = None - - def __init__(self): - super().__init__() - - def __getattr__(self, name: str) -> Any: - """The only reason we overwrite `getattr` here is to gracefully deprecate accessing - config attributes directly. See https://github.com/huggingface/diffusers/pull/3129 We need to overwrite - __getattr__ here in addition so that we don't trigger `torch.nn.Module`'s __getattr__': - https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module - """ - - is_in_config = "_internal_dict" in self.__dict__ and hasattr(self.__dict__["_internal_dict"], name) - is_attribute = name in self.__dict__ - - if is_in_config and not is_attribute: - deprecation_message = f"Accessing config attribute `{name}` directly via '{type(self).__name__}' object attribute is deprecated. Please access '{name}' over '{type(self).__name__}'s config object instead, e.g. 'unet.config.{name}'." 
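A stripped-down sketch of the filename-based dispatch that `load_state_dict` performs above, with the error handling omitted (the helper name `load_weights` is invented for illustration):

```python
# Minimal sketch: pick the loader from the file extension, always mapping to CPU first.
import torch
import safetensors.torch

def load_weights(checkpoint_file: str) -> dict:
    if checkpoint_file.endswith(".safetensors"):
        # Pickle-free loading for safetensors checkpoints.
        return safetensors.torch.load_file(checkpoint_file, device="cpu")
    # Fall back to the classic pickle-based PyTorch format.
    return torch.load(checkpoint_file, map_location="cpu")
```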
- deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False, stacklevel=3) - return self._internal_dict[name] - - # call PyTorch's https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module - return super().__getattr__(name) - - @property - def is_gradient_checkpointing(self) -> bool: - """ - Whether gradient checkpointing is activated for this model or not. - """ - return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules()) - - def enable_gradient_checkpointing(self): - """ - Activates gradient checkpointing for the current model (may be referred to as *activation checkpointing* or - *checkpoint activations* in other frameworks). - """ - if not self._supports_gradient_checkpointing: - raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.") - self.apply(partial(self._set_gradient_checkpointing, value=True)) - - def disable_gradient_checkpointing(self): - """ - Deactivates gradient checkpointing for the current model (may be referred to as *activation checkpointing* or - *checkpoint activations* in other frameworks). - """ - if self._supports_gradient_checkpointing: - self.apply(partial(self._set_gradient_checkpointing, value=False)) - - def set_use_memory_efficient_attention_xformers( - self, valid: bool, attention_op: Optional[Callable] = None - ) -> None: - # Recursively walk through all the children. - # Any children which exposes the set_use_memory_efficient_attention_xformers method - # gets the message - def fn_recursive_set_mem_eff(module: torch.nn.Module): - if hasattr(module, "set_use_memory_efficient_attention_xformers"): - module.set_use_memory_efficient_attention_xformers(valid, attention_op) - - for child in module.children(): - fn_recursive_set_mem_eff(child) - - for module in self.children(): - if isinstance(module, torch.nn.Module): - fn_recursive_set_mem_eff(module) - - def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): - r""" - Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). - - When this option is enabled, you should observe lower GPU memory usage and a potential speed up during - inference. Speed up during training is not guaranteed. - - - - ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes - precedent. - - - - Parameters: - attention_op (`Callable`, *optional*): - Override the default `None` operator for use as `op` argument to the - [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention) - function of xFormers. - - Examples: - - ```py - >>> import torch - >>> from diffusers import UNet2DConditionModel - >>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp - - >>> model = UNet2DConditionModel.from_pretrained( - ... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 - ... ) - >>> model = model.to("cuda") - >>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) - ``` - """ - self.set_use_memory_efficient_attention_xformers(True, attention_op) - - def disable_xformers_memory_efficient_attention(self): - r""" - Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). 
- """ - self.set_use_memory_efficient_attention_xformers(False) - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - is_main_process: bool = True, - save_function: Callable = None, - safe_serialization: bool = False, - variant: Optional[str] = None, - ): - """ - Save a model and its configuration file to a directory so that it can be reloaded using the - [`~models.ModelMixin.from_pretrained`] class method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to save a model and its configuration file to. Will be created if it doesn't exist. - is_main_process (`bool`, *optional*, defaults to `True`): - Whether the process calling this is the main process or not. Useful during distributed training and you - need to call this function on all processes. In this case, set `is_main_process=True` only on the main - process to avoid race conditions. - save_function (`Callable`): - The function to use to save the state dictionary. Useful during distributed training when you need to - replace `torch.save` with another method. Can be configured with the environment variable - `DIFFUSERS_SAVE_MODE`. - safe_serialization (`bool`, *optional*, defaults to `False`): - Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. - variant (`str`, *optional*): - If specified, weights are saved in the format `pytorch_model..bin`. - """ - if safe_serialization and not is_safetensors_available(): - raise ImportError("`safe_serialization` requires the `safetensors library: `pip install safetensors`.") - - if os.path.isfile(save_directory): - logger.error(f"Provided path ({save_directory}) should be a directory, not a file") - return - - os.makedirs(save_directory, exist_ok=True) - - model_to_save = self - - # Attach architecture to the config - # Save the config - if is_main_process: - model_to_save.save_config(save_directory) - - # Save the model - state_dict = model_to_save.state_dict() - - weights_name = SAFETENSORS_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME - weights_name = _add_variant(weights_name, variant) - - # Save the model - if safe_serialization: - safetensors.torch.save_file( - state_dict, os.path.join(save_directory, weights_name), metadata={"format": "pt"} - ) - else: - torch.save(state_dict, os.path.join(save_directory, weights_name)) - - logger.info(f"Model weights saved in {os.path.join(save_directory, weights_name)}") - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a pretrained PyTorch model from a pretrained model configuration. - - The model is set in evaluation mode - `model.eval()` - by default, and dropout modules are deactivated. To - train the model, set it back in training mode with `model.train()`. - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on - the Hub. - - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved - with [`~ModelMixin.save_pretrained`]. - - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory where a downloaded pretrained model configuration is cached if the standard cache - is not used. - torch_dtype (`str` or `torch.dtype`, *optional*): - Override the default `torch.dtype` and load the model with another dtype. 
If `"auto"` is passed, the - dtype is automatically derived from the model's weights. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to resume downloading the model weights and configuration files. If set to `False`, any - incompletely downloaded files are deleted. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info (`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether to only load local model weights and configuration files or not. If set to `True`, the model - won't be downloaded from the Hub. - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from - `diffusers-cli login` (stored in `~/.huggingface`) is used. - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier - allowed by Git. - from_flax (`bool`, *optional*, defaults to `False`): - Load the model weights from a Flax checkpoint save file. - subfolder (`str`, *optional*, defaults to `""`): - The subfolder location of a model file within a larger model repository on the Hub or locally. - mirror (`str`, *optional*): - Mirror source to resolve accessibility issues if you're downloading a model in China. We do not - guarantee the timeliness or safety of the source, and you should refer to the mirror site for more - information. - device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): - A map that specifies where each submodule should go. It doesn't need to be defined for each - parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the - same device. - - Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For - more information about each option see [designing a device - map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). - max_memory (`Dict`, *optional*): - A dictionary device identifier for the maximum memory. Will default to the maximum memory available for - each GPU and the available CPU RAM if unset. - offload_folder (`str` or `os.PathLike`, *optional*): - The path to offload weights if `device_map` contains the value `"disk"`. - offload_state_dict (`bool`, *optional*): - If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if - the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` - when there is some disk offload. - low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): - Speed up model loading only loading the pretrained weights and not initializing the weights. This also - tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. - Only supported for PyTorch >= 1.9.0. 
If you are using an older version of PyTorch, setting this - argument to `True` will raise an error. - variant (`str`, *optional*): - Load weights from a specified `variant` filename such as `"fp16"` or `"ema"`. This is ignored when - loading `from_flax`. - use_safetensors (`bool`, *optional*, defaults to `None`): - If set to `None`, the `safetensors` weights are downloaded if they're available **and** if the - `safetensors` library is installed. If set to `True`, the model is forcibly loaded from `safetensors` - weights. If set to `False`, `safetensors` weights are not loaded. - - - - To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with - `huggingface-cli login`. You can also activate the special - ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a - firewalled environment. - - - - Example: - - ```py - from diffusers import UNet2DConditionModel - - unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") - ``` - - If you get the error message below, you need to finetune the weights for your downstream task: - - ```bash - Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated - You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. - ``` - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - ignore_mismatched_sizes = kwargs.pop("ignore_mismatched_sizes", False) - force_download = kwargs.pop("force_download", False) - from_flax = kwargs.pop("from_flax", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - output_loading_info = kwargs.pop("output_loading_info", False) - local_files_only = kwargs.pop("local_files_only", HF_HUB_OFFLINE) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - torch_dtype = kwargs.pop("torch_dtype", None) - subfolder = kwargs.pop("subfolder", None) - device_map = kwargs.pop("device_map", None) - max_memory = kwargs.pop("max_memory", None) - offload_folder = kwargs.pop("offload_folder", None) - offload_state_dict = kwargs.pop("offload_state_dict", False) - low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) - variant = kwargs.pop("variant", None) - use_safetensors = kwargs.pop("use_safetensors", None) - - if use_safetensors and not is_safetensors_available(): - raise ValueError( - "`use_safetensors`=True but safetensors is not installed. Please install safetensors with `pip install safetensors" - ) - - allow_pickle = False - if use_safetensors is None: - use_safetensors = is_safetensors_available() - allow_pickle = True - - if low_cpu_mem_usage and not is_accelerate_available(): - low_cpu_mem_usage = False - logger.warning( - "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the" - " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install" - " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip" - " install accelerate\n```\n." 
- ) - - if device_map is not None and not is_accelerate_available(): - raise NotImplementedError( - "Loading and dispatching requires `accelerate`. Please make sure to install accelerate or set" - " `device_map=None`. You can install accelerate with `pip install accelerate`." - ) - - # Check if we can handle device_map and dispatching the weights - if device_map is not None and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `device_map=None`." - ) - - if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `low_cpu_mem_usage=False`." - ) - - if low_cpu_mem_usage is False and device_map is not None: - raise ValueError( - f"You cannot set `low_cpu_mem_usage` to `False` while using device_map={device_map} for loading and" - " dispatching. Please make sure to set `low_cpu_mem_usage=True`." - ) - - # Load config if we don't provide a configuration - config_path = pretrained_model_name_or_path - - user_agent = { - "diffusers": __version__, - "file_type": "model", - "framework": "pytorch", - } - - # load config - config, unused_kwargs, commit_hash = cls.load_config( - config_path, - cache_dir=cache_dir, - return_unused_kwargs=True, - return_commit_hash=True, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - device_map=device_map, - max_memory=max_memory, - offload_folder=offload_folder, - offload_state_dict=offload_state_dict, - user_agent=user_agent, - **kwargs, - ) - - # load model - model_file = None - if from_flax: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=FLAX_WEIGHTS_NAME, - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - commit_hash=commit_hash, - ) - model = cls.from_config(config, **unused_kwargs) - - # Convert the weights - from .modeling_pytorch_flax_utils import load_flax_checkpoint_in_pytorch_model - - model = load_flax_checkpoint_in_pytorch_model(model, model_file) - else: - if use_safetensors: - try: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=_add_variant(SAFETENSORS_WEIGHTS_NAME, variant), - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - commit_hash=commit_hash, - ) - except IOError as e: - if not allow_pickle: - raise e - pass - if model_file is None: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=_add_variant(WEIGHTS_NAME, variant), - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - commit_hash=commit_hash, - ) - - if low_cpu_mem_usage: - # Instantiate model with empty weights - with accelerate.init_empty_weights(): - model = cls.from_config(config, 
**unused_kwargs) - - # if device_map is None, load the state dict and move the params from meta device to the cpu - if device_map is None: - param_device = "cpu" - state_dict = load_state_dict(model_file, variant=variant) - model._convert_deprecated_attention_blocks(state_dict) - # move the params from meta device to cpu - missing_keys = set(model.state_dict().keys()) - set(state_dict.keys()) - if len(missing_keys) > 0: - raise ValueError( - f"Cannot load {cls} from {pretrained_model_name_or_path} because the following keys are" - f" missing: \n {', '.join(missing_keys)}. \n Please make sure to pass" - " `low_cpu_mem_usage=False` and `device_map=None` if you want to randomly initialize" - " those weights or else make sure your checkpoint file is correct." - ) - unexpected_keys = [] - - empty_state_dict = model.state_dict() - for param_name, param in state_dict.items(): - accepts_dtype = "dtype" in set( - inspect.signature(set_module_tensor_to_device).parameters.keys() - ) - - if param_name not in empty_state_dict: - unexpected_keys.append(param_name) - continue - - if empty_state_dict[param_name].shape != param.shape: - raise ValueError( - f"Cannot load {pretrained_model_name_or_path} because {param_name} expected shape {empty_state_dict[param_name]}, but got {param.shape}. If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example." - ) - - if accepts_dtype: - set_module_tensor_to_device( - model, param_name, param_device, value=param, dtype=torch_dtype - ) - else: - set_module_tensor_to_device(model, param_name, param_device, value=param) - - if cls._keys_to_ignore_on_load_unexpected is not None: - for pat in cls._keys_to_ignore_on_load_unexpected: - unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None] - - if len(unexpected_keys) > 0: - logger.warn( - f"Some weights of the model checkpoint were not used when initializing {cls.__name__}: \n {[', '.join(unexpected_keys)]}" - ) - - else: # else let accelerate handle loading and dispatching. - # Load weights and dispatch according to the device_map - # by default the device_map is None and the weights are loaded on the CPU - try: - accelerate.load_checkpoint_and_dispatch( - model, - model_file, - device_map, - max_memory=max_memory, - offload_folder=offload_folder, - offload_state_dict=offload_state_dict, - dtype=torch_dtype, - ) - except AttributeError as e: - # When using accelerate loading, we do not have the ability to load the state - # dict and rename the weight names manually. Additionally, accelerate skips - # torch loading conventions and directly writes into `module.{_buffers, _parameters}` - # (which look like they should be private variables?), so we can't use the standard hooks - # to rename parameters on load. We need to mimic the original weight names so the correct - # attributes are available. After we have loaded the weights, we convert the deprecated - # names to the new non-deprecated names. Then we _greatly encourage_ the user to convert - # the weights so we don't have to do this again. - - if "'Attention' object has no attribute" in str(e): - logger.warn( - f"Taking `{str(e)}` while using `accelerate.load_checkpoint_and_dispatch` to mean {pretrained_model_name_or_path}" - " was saved with deprecated attention block weight names. 
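A toy sketch of the `low_cpu_mem_usage` path above: parameters are first allocated on the meta device, then materialised one by one from the checkpoint, mirroring the `accepts_dtype` check made in the loop:

```python
# Toy meta-device loading sketch; module and state dict are made up for illustration.
import inspect
import torch
from accelerate import init_empty_weights
from accelerate.utils import set_module_tensor_to_device

with init_empty_weights():
    model = torch.nn.Linear(4, 4)            # parameters live on the meta device, no RAM used yet

state_dict = {"weight": torch.randn(4, 4), "bias": torch.zeros(4)}
accepts_dtype = "dtype" in inspect.signature(set_module_tensor_to_device).parameters
for name, tensor in state_dict.items():
    if accepts_dtype:
        set_module_tensor_to_device(model, name, "cpu", value=tensor, dtype=torch.float32)
    else:
        set_module_tensor_to_device(model, name, "cpu", value=tensor)
```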
We will load it with the deprecated attention block" - " names and convert them on the fly to the new attention block format. Please re-save the model after this conversion," - " so we don't have to do the on the fly renaming in the future. If the model is from a hub checkpoint," - " please also re-upload it or open a PR on the original repository." - ) - model._temp_convert_self_to_deprecated_attention_blocks() - accelerate.load_checkpoint_and_dispatch( - model, - model_file, - device_map, - max_memory=max_memory, - offload_folder=offload_folder, - offload_state_dict=offload_state_dict, - dtype=torch_dtype, - ) - model._undo_temp_convert_self_to_deprecated_attention_blocks() - else: - raise e - - loading_info = { - "missing_keys": [], - "unexpected_keys": [], - "mismatched_keys": [], - "error_msgs": [], - } - else: - model = cls.from_config(config, **unused_kwargs) - - state_dict = load_state_dict(model_file, variant=variant) - model._convert_deprecated_attention_blocks(state_dict) - - model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model( - model, - state_dict, - model_file, - pretrained_model_name_or_path, - ignore_mismatched_sizes=ignore_mismatched_sizes, - ) - - loading_info = { - "missing_keys": missing_keys, - "unexpected_keys": unexpected_keys, - "mismatched_keys": mismatched_keys, - "error_msgs": error_msgs, - } - - if torch_dtype is not None and not isinstance(torch_dtype, torch.dtype): - raise ValueError( - f"{torch_dtype} needs to be of type `torch.dtype`, e.g. `torch.float16`, but is {type(torch_dtype)}." - ) - elif torch_dtype is not None: - model = model.to(torch_dtype) - - model.register_to_config(_name_or_path=pretrained_model_name_or_path) - - # Set model in evaluation mode to deactivate DropOut modules by default - model.eval() - if output_loading_info: - return model, loading_info - - return model - - @classmethod - def _load_pretrained_model( - cls, - model, - state_dict, - resolved_archive_file, - pretrained_model_name_or_path, - ignore_mismatched_sizes=False, - ): - # Retrieve missing & unexpected_keys - model_state_dict = model.state_dict() - loaded_keys = list(state_dict.keys()) - - expected_keys = list(model_state_dict.keys()) - - original_loaded_keys = loaded_keys - - missing_keys = list(set(expected_keys) - set(loaded_keys)) - unexpected_keys = list(set(loaded_keys) - set(expected_keys)) - - # Make sure we are able to load base models as well as derived models (with heads) - model_to_load = model - - def _find_mismatched_keys( - state_dict, - model_state_dict, - loaded_keys, - ignore_mismatched_sizes, - ): - mismatched_keys = [] - if ignore_mismatched_sizes: - for checkpoint_key in loaded_keys: - model_key = checkpoint_key - - if ( - model_key in model_state_dict - and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape - ): - mismatched_keys.append( - (checkpoint_key, state_dict[checkpoint_key].shape, model_state_dict[model_key].shape) - ) - del state_dict[checkpoint_key] - return mismatched_keys - - if state_dict is not None: - # Whole checkpoint - mismatched_keys = _find_mismatched_keys( - state_dict, - model_state_dict, - original_loaded_keys, - ignore_mismatched_sizes, - ) - error_msgs = _load_state_dict_into_model(model_to_load, state_dict) - - if len(error_msgs) > 0: - error_msg = "\n\t".join(error_msgs) - if "size mismatch" in error_msg: - error_msg += ( - "\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method." 
- ) - raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") - - if len(unexpected_keys) > 0: - logger.warning( - f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when" - f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are" - f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task" - " or with another architecture (e.g. initializing a BertForSequenceClassification model from a" - " BertForPreTraining model).\n- This IS NOT expected if you are initializing" - f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly" - " identical (initializing a BertForSequenceClassification model from a" - " BertForSequenceClassification model)." - ) - else: - logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n") - if len(missing_keys) > 0: - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably" - " TRAIN this model on a down-stream task to be able to use it for predictions and inference." - ) - elif len(mismatched_keys) == 0: - logger.info( - f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at" - f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the" - f" checkpoint was trained on, you can already use {model.__class__.__name__} for predictions" - " without further training." - ) - if len(mismatched_keys) > 0: - mismatched_warning = "\n".join( - [ - f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated" - for key, shape1, shape2 in mismatched_keys - ] - ) - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not" - f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be" - " able to use it for predictions and inference." - ) - - return model, missing_keys, unexpected_keys, mismatched_keys, error_msgs - - @property - def device(self) -> device: - """ - `torch.device`: The device on which the module is (assuming that all the module parameters are on the same - device). - """ - return get_parameter_device(self) - - @property - def dtype(self) -> torch.dtype: - """ - `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype). - """ - return get_parameter_dtype(self) - - def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool = False) -> int: - """ - Get number of (trainable or non-embedding) parameters in the module. - - Args: - only_trainable (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of trainable parameters. - exclude_embeddings (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of non-embedding parameters. - - Returns: - `int`: The number of parameters. 
- - Example: - - ```py - from diffusers import UNet2DConditionModel - - model_id = "runwayml/stable-diffusion-v1-5" - unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") - unet.num_parameters(only_trainable=True) - 859520964 - ``` - """ - - if exclude_embeddings: - embedding_param_names = [ - f"{name}.weight" - for name, module_type in self.named_modules() - if isinstance(module_type, torch.nn.Embedding) - ] - non_embedding_parameters = [ - parameter for name, parameter in self.named_parameters() if name not in embedding_param_names - ] - return sum(p.numel() for p in non_embedding_parameters if p.requires_grad or not only_trainable) - else: - return sum(p.numel() for p in self.parameters() if p.requires_grad or not only_trainable) - - def _convert_deprecated_attention_blocks(self, state_dict): - deprecated_attention_block_paths = [] - - def recursive_find_attn_block(name, module): - if hasattr(module, "_from_deprecated_attn_block") and module._from_deprecated_attn_block: - deprecated_attention_block_paths.append(name) - - for sub_name, sub_module in module.named_children(): - sub_name = sub_name if name == "" else f"{name}.{sub_name}" - recursive_find_attn_block(sub_name, sub_module) - - recursive_find_attn_block("", self) - - # NOTE: we have to check if the deprecated parameters are in the state dict - # because it is possible we are loading from a state dict that was already - # converted - - for path in deprecated_attention_block_paths: - # group_norm path stays the same - - # query -> to_q - if f"{path}.query.weight" in state_dict: - state_dict[f"{path}.to_q.weight"] = state_dict.pop(f"{path}.query.weight") - if f"{path}.query.bias" in state_dict: - state_dict[f"{path}.to_q.bias"] = state_dict.pop(f"{path}.query.bias") - - # key -> to_k - if f"{path}.key.weight" in state_dict: - state_dict[f"{path}.to_k.weight"] = state_dict.pop(f"{path}.key.weight") - if f"{path}.key.bias" in state_dict: - state_dict[f"{path}.to_k.bias"] = state_dict.pop(f"{path}.key.bias") - - # value -> to_v - if f"{path}.value.weight" in state_dict: - state_dict[f"{path}.to_v.weight"] = state_dict.pop(f"{path}.value.weight") - if f"{path}.value.bias" in state_dict: - state_dict[f"{path}.to_v.bias"] = state_dict.pop(f"{path}.value.bias") - - # proj_attn -> to_out.0 - if f"{path}.proj_attn.weight" in state_dict: - state_dict[f"{path}.to_out.0.weight"] = state_dict.pop(f"{path}.proj_attn.weight") - if f"{path}.proj_attn.bias" in state_dict: - state_dict[f"{path}.to_out.0.bias"] = state_dict.pop(f"{path}.proj_attn.bias") - - def _temp_convert_self_to_deprecated_attention_blocks(self): - deprecated_attention_block_modules = [] - - def recursive_find_attn_block(module): - if hasattr(module, "_from_deprecated_attn_block") and module._from_deprecated_attn_block: - deprecated_attention_block_modules.append(module) - - for sub_module in module.children(): - recursive_find_attn_block(sub_module) - - recursive_find_attn_block(self) - - for module in deprecated_attention_block_modules: - module.query = module.to_q - module.key = module.to_k - module.value = module.to_v - module.proj_attn = module.to_out[0] - - # We don't _have_ to delete the old attributes, but it's helpful to ensure - # that _all_ the weights are loaded into the new attributes and we're not - # making an incorrect assumption that this model should be converted when - # it really shouldn't be. 
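An illustrative helper (hypothetical, not part of the file above) showing the key renaming that `_convert_deprecated_attention_blocks` applies for a single attention-block path:

```python
# Hypothetical standalone version of the query/key/value/proj_attn rename for one block path.
def rename_deprecated_attn_keys(state_dict: dict, path: str) -> None:
    mapping = {"query": "to_q", "key": "to_k", "value": "to_v", "proj_attn": "to_out.0"}
    for old, new in mapping.items():
        for suffix in ("weight", "bias"):
            old_key = f"{path}.{old}.{suffix}"
            if old_key in state_dict:
                state_dict[f"{path}.{new}.{suffix}"] = state_dict.pop(old_key)
```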
- del module.to_q - del module.to_k - del module.to_v - del module.to_out - - def _undo_temp_convert_self_to_deprecated_attention_blocks(self): - deprecated_attention_block_modules = [] - - def recursive_find_attn_block(module): - if hasattr(module, "_from_deprecated_attn_block") and module._from_deprecated_attn_block: - deprecated_attention_block_modules.append(module) - - for sub_module in module.children(): - recursive_find_attn_block(sub_module) - - recursive_find_attn_block(self) - - for module in deprecated_attention_block_modules: - module.to_q = module.query - module.to_k = module.key - module.to_v = module.value - module.to_out = nn.ModuleList([module.proj_attn, nn.Dropout(module.dropout)]) - - del module.query - del module.key - del module.value - del module.proj_attn diff --git a/spaces/AnimaLab/bias-test-gpt-pairs/bloomberg_vis.py b/spaces/AnimaLab/bias-test-gpt-pairs/bloomberg_vis.py deleted file mode 100644 index 8406b4e72d32e9a3b606c3d0f2c60becfc8e9b8b..0000000000000000000000000000000000000000 --- a/spaces/AnimaLab/bias-test-gpt-pairs/bloomberg_vis.py +++ /dev/null @@ -1,85 +0,0 @@ -# def bloombergViz(val, numblocks=10, flip=False): -# percent = round(val * 100) -# percentStr = f"{percent}" -# filled = "
" -# unfilled = "
" -# numFilled = round((percent/100) * numblocks) -# numUnFilled = numblocks - numFilled -# if flip: -# return numFilled * unfilled + numUnFilled * filled; -# return numFilled * filled + numUnFilled * unfilled - -# def att_bloombergViz(att, val, numblocks, flip=False): -# viz = bloombergViz(val, numblocks, flip) -# attHTML = f"
{att}: {round(val*100)}%
{viz}

" -# return attHTML - -def bloombergViz(att, val, numblocks, score_templates_df, onRight=False, flip=False): - # percent = round(val * 100) - # percentStr = f"{percent}" - # filled = "
{}
" - # unfilled = "
{}
" - # numFilled = round((percent/100) * numblocks) - # numUnFilled = numblocks - numFilled - - leftColor = "#065b41" #"#555" - rightColor = "#35d4ac" #"#999" - if flip: - leftColor = "#35d4ac" #"#999" - rightColor = "#065b41" #"#555" - res = "" - spanClass = "tooltiptext_left" - if onRight: - spanClass = "tooltiptext_right" - dfy = score_templates_df.loc[(score_templates_df['att_term'] == att) & (score_templates_df['stereotyped_b'] == 'yes')] - dfn = score_templates_df.loc[(score_templates_df['att_term'] == att) & (score_templates_df['stereotyped_b'] == 'no')] - #print("dfy", dfy) - #print("dfn", dfn) - for i in range(len(dfy.index)): - #print("--GROUP IN BLOOMBERG--") - groups = dfy.iloc[i, dfy.columns.get_loc("groups_rel")].split("/") - gr_disp = groups[0]+"/"+groups[1] - grp_refs = list(dfy.iloc[i, dfy.columns.get_loc("grp_refs")]) - - template = dfy.iloc[i, dfy.columns.get_loc("template")] - for grp_pair in grp_refs: - #print(f"Item: {grp_pair[0]} - {grp_pair[1]}") - template = template.replace("[R]", grp_pair[0]+"/"+grp_pair[1], 1) - - # template based - disp = template.replace("[T]", f"[{gr_disp}]") #, 1) - - # sentence/alt-sentence based - #sentence = dfy.iloc[i, dfy.columns.get_loc("sentence")] - #alt_sentence = dfy.iloc[i, dfy.columns.get_loc("alt_sentence")] - #disp = f'"{sentence}"/"{alt_sentence}"' - - res += f"
{disp}
" - for i in range(len(dfn.index)): - groups = dfn.iloc[i, dfn.columns.get_loc("groups_rel")].split("/") - gr_disp = groups[0]+"/"+groups[1] - grp_refs = list(dfn.iloc[i, dfn.columns.get_loc("grp_refs")]) - - template = dfn.iloc[i, dfn.columns.get_loc("template")] - for grp_pair in grp_refs: - #print(f"Item: {grp_pair[0]} - {grp_pair[1]}") - template = template.replace("[R]", grp_pair[0]+"/"+grp_pair[1], 1) - - # template based - disp = template.replace("[T]", f"[{gr_disp}]")#, 1) - - # sentence/alt-sentence based - #sentence = dfn.iloc[i, dfn.columns.get_loc("sentence")] - #alt_sentence = dfn.iloc[i, dfn.columns.get_loc("alt_sentence")] - #disp = f'"{sentence}"/"{alt_sentence}"' - - res += f"
{disp}
" - return res - # if flip: - # return numFilled * unfilled + numUnFilled * filled; - # return numFilled * filled + numUnFilled * unfilled - -def att_bloombergViz(att, val, numblocks, score_templates_df, onRight=False, flip=False): - viz = bloombergViz(att, val, numblocks, score_templates_df, onRight, flip) - attHTML = f"
{att}: {round(val*100)}%
{viz}

" - return attHTML \ No newline at end of file diff --git a/spaces/AnonAndDesu/Desu_Proxy/greeting.md b/spaces/AnonAndDesu/Desu_Proxy/greeting.md deleted file mode 100644 index 9023b3392aff8af24b92ba4b9bd0bc25bdbbe0f2..0000000000000000000000000000000000000000 --- a/spaces/AnonAndDesu/Desu_Proxy/greeting.md +++ /dev/null @@ -1,3 +0,0 @@ -Only for desu lovers~ -https://rentry.co/Desu_Proxy - \ No newline at end of file diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/setup.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/setup.py deleted file mode 100644 index 204ee2cc0bc1742e92a071c0cf8dc23a44a4bd59..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/setup.py +++ /dev/null @@ -1,7 +0,0 @@ -from setuptools import setup - -setup( - name="guided-diffusion", - py_modules=["guided_diffusion"], - install_requires=["blobfile>=1.0.5", "torch", "tqdm"], -) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/__init__.py deleted file mode 100644 index 0f33124ed23fc6f27119a37bcb5ab004d3572be0..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/spaces/AriaMei/TTSdemo/monotonic_align/setup.py b/spaces/AriaMei/TTSdemo/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/AriaMei/TTSdemo/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', 
- ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/ArkanDash/rvc-models-new/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/ArkanDash/rvc-models-new/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000 --- a/spaces/ArkanDash/rvc-models-new/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - Interpolate the unvoiced (zero) frames of the F0 contour. - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # this may be an unnecessary copy - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/config/__init__.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/config/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AsakuraMizu/moe-tts/text/__init__.py b/spaces/AsakuraMizu/moe-tts/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/AsakuraMizu/moe-tts/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a
string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/cmdoptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/cmdoptions.py deleted file mode 100644 index 02ba60827933d6623cdf6b1417762fee47c1ab6f..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/cmdoptions.py +++ /dev/null @@ -1,1074 +0,0 @@ -""" -shared options and groups - -The principle here is to define options once, but *not* instantiate them -globally. One reason being that options with action='append' can carry state -between parses. pip parses general options twice internally, and shouldn't -pass on state. To be consistent, all options will follow this design. -""" - -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -import importlib.util -import logging -import os -import textwrap -from functools import partial -from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values -from textwrap import dedent -from typing import Any, Callable, Dict, Optional, Tuple - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.cli.parser import ConfigOptionParser -from pip._internal.exceptions import CommandError -from pip._internal.locations import USER_CACHE_DIR, get_src_prefix -from pip._internal.models.format_control import FormatControl -from pip._internal.models.index import PyPI -from pip._internal.models.target_python import TargetPython -from pip._internal.utils.hashes import STRONG_HASHES -from pip._internal.utils.misc import strtobool - -logger = logging.getLogger(__name__) - - -def raise_option_error(parser: OptionParser, option: Option, msg: str) -> None: - """ - Raise an option parsing error using parser.error(). - - Args: - parser: an OptionParser instance. - option: an Option instance. - msg: the error text. - """ - msg = f"{option} error: {msg}" - msg = textwrap.fill(" ".join(msg.split())) - parser.error(msg) - - -def make_option_group(group: Dict[str, Any], parser: ConfigOptionParser) -> OptionGroup: - """ - Return an OptionGroup object - group -- assumed to be dict with 'name' and 'options' keys - parser -- an optparse Parser - """ - option_group = OptionGroup(parser, group["name"]) - for option in group["options"]: - option_group.add_option(option()) - return option_group - - -def check_dist_restriction(options: Values, check_target: bool = False) -> None: - """Function for determining if custom platform options are allowed. - - :param options: The OptionParser options. - :param check_target: Whether or not to check if --target is being used. 
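A toy illustration of the symbol-to-id lookup that `text_to_sequence` above performs; the symbol list is made up, and unknown characters are silently skipped exactly as in the function:

```python
# Made-up symbol table; real models ship their own symbol list and cleaner names.
symbols = ["_", " ", "a", "b", "c"]
_symbol_to_id = {s: i for i, s in enumerate(symbols)}

sequence = [_symbol_to_id[ch] for ch in "abc ba!" if ch in _symbol_to_id]
print(sequence)   # [2, 3, 4, 1, 3, 2]  ("!" is dropped because it is not in the symbol set)
```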
- """ - dist_restriction_set = any( - [ - options.python_version, - options.platforms, - options.abis, - options.implementation, - ] - ) - - binary_only = FormatControl(set(), {":all:"}) - sdist_dependencies_allowed = ( - options.format_control != binary_only and not options.ignore_dependencies - ) - - # Installations or downloads using dist restrictions must not combine - # source distributions and dist-specific wheels, as they are not - # guaranteed to be locally compatible. - if dist_restriction_set and sdist_dependencies_allowed: - raise CommandError( - "When restricting platform and interpreter constraints using " - "--python-version, --platform, --abi, or --implementation, " - "either --no-deps must be set, or --only-binary=:all: must be " - "set and --no-binary must not be set (or must be set to " - ":none:)." - ) - - if check_target: - if dist_restriction_set and not options.target_dir: - raise CommandError( - "Can not use any platform or abi specific options unless " - "installing via '--target'" - ) - - -def _path_option_check(option: Option, opt: str, value: str) -> str: - return os.path.expanduser(value) - - -def _package_name_option_check(option: Option, opt: str, value: str) -> str: - return canonicalize_name(value) - - -class PipOption(Option): - TYPES = Option.TYPES + ("path", "package_name") - TYPE_CHECKER = Option.TYPE_CHECKER.copy() - TYPE_CHECKER["package_name"] = _package_name_option_check - TYPE_CHECKER["path"] = _path_option_check - - -########### -# options # -########### - -help_: Callable[..., Option] = partial( - Option, - "-h", - "--help", - dest="help", - action="help", - help="Show help.", -) - -debug_mode: Callable[..., Option] = partial( - Option, - "--debug", - dest="debug_mode", - action="store_true", - default=False, - help=( - "Let unhandled exceptions propagate outside the main subroutine, " - "instead of logging them to stderr." - ), -) - -isolated_mode: Callable[..., Option] = partial( - Option, - "--isolated", - dest="isolated_mode", - action="store_true", - default=False, - help=( - "Run pip in an isolated mode, ignoring environment variables and user " - "configuration." - ), -) - -require_virtualenv: Callable[..., Option] = partial( - Option, - "--require-virtualenv", - "--require-venv", - dest="require_venv", - action="store_true", - default=False, - help=( - "Allow pip to only run in a virtual environment; " - "exit with an error otherwise." - ), -) - -override_externally_managed: Callable[..., Option] = partial( - Option, - "--break-system-packages", - dest="override_externally_managed", - action="store_true", - help="Allow pip to modify an EXTERNALLY-MANAGED Python installation", -) - -python: Callable[..., Option] = partial( - Option, - "--python", - dest="python", - help="Run pip with the specified Python interpreter.", -) - -verbose: Callable[..., Option] = partial( - Option, - "-v", - "--verbose", - dest="verbose", - action="count", - default=0, - help="Give more output. Option is additive, and can be used up to 3 times.", -) - -no_color: Callable[..., Option] = partial( - Option, - "--no-color", - dest="no_color", - action="store_true", - default=False, - help="Suppress colored output.", -) - -version: Callable[..., Option] = partial( - Option, - "-V", - "--version", - dest="version", - action="store_true", - help="Show version and exit.", -) - -quiet: Callable[..., Option] = partial( - Option, - "-q", - "--quiet", - dest="quiet", - action="count", - default=0, - help=( - "Give less output. 
Option is additive, and can be used up to 3" - " times (corresponding to WARNING, ERROR, and CRITICAL logging" - " levels)." - ), -) - -progress_bar: Callable[..., Option] = partial( - Option, - "--progress-bar", - dest="progress_bar", - type="choice", - choices=["on", "off"], - default="on", - help="Specify whether the progress bar should be used [on, off] (default: on)", -) - -log: Callable[..., Option] = partial( - PipOption, - "--log", - "--log-file", - "--local-log", - dest="log", - metavar="path", - type="path", - help="Path to a verbose appending log.", -) - -no_input: Callable[..., Option] = partial( - Option, - # Don't ask for input - "--no-input", - dest="no_input", - action="store_true", - default=False, - help="Disable prompting for input.", -) - -keyring_provider: Callable[..., Option] = partial( - Option, - "--keyring-provider", - dest="keyring_provider", - choices=["auto", "disabled", "import", "subprocess"], - default="auto", - help=( - "Enable the credential lookup via the keyring library if user input is allowed." - " Specify which mechanism to use [disabled, import, subprocess]." - " (default: disabled)" - ), -) - -proxy: Callable[..., Option] = partial( - Option, - "--proxy", - dest="proxy", - type="str", - default="", - help="Specify a proxy in the form scheme://[user:passwd@]proxy.server:port.", -) - -retries: Callable[..., Option] = partial( - Option, - "--retries", - dest="retries", - type="int", - default=5, - help="Maximum number of retries each connection should attempt " - "(default %default times).", -) - -timeout: Callable[..., Option] = partial( - Option, - "--timeout", - "--default-timeout", - metavar="sec", - dest="timeout", - type="float", - default=15, - help="Set the socket timeout (default %default seconds).", -) - - -def exists_action() -> Option: - return Option( - # Option when path already exist - "--exists-action", - dest="exists_action", - type="choice", - choices=["s", "i", "w", "b", "a"], - default=[], - action="append", - metavar="action", - help="Default action when a path already exists: " - "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.", - ) - - -cert: Callable[..., Option] = partial( - PipOption, - "--cert", - dest="cert", - type="path", - metavar="path", - help=( - "Path to PEM-encoded CA certificate bundle. " - "If provided, overrides the default. " - "See 'SSL Certificate Verification' in pip documentation " - "for more information." - ), -) - -client_cert: Callable[..., Option] = partial( - PipOption, - "--client-cert", - dest="client_cert", - type="path", - default=None, - metavar="path", - help="Path to SSL client certificate, a single file containing the " - "private key and the certificate in PEM format.", -) - -index_url: Callable[..., Option] = partial( - Option, - "-i", - "--index-url", - "--pypi-url", - dest="index_url", - metavar="URL", - default=PyPI.simple_url, - help="Base URL of the Python Package Index (default %default). " - "This should point to a repository compliant with PEP 503 " - "(the simple repository API) or a local directory laid out " - "in the same format.", -) - - -def extra_index_url() -> Option: - return Option( - "--extra-index-url", - dest="extra_index_urls", - metavar="URL", - action="append", - default=[], - help="Extra URLs of package indexes to use in addition to " - "--index-url. 
Should follow the same rules as " - "--index-url.", - ) - - -no_index: Callable[..., Option] = partial( - Option, - "--no-index", - dest="no_index", - action="store_true", - default=False, - help="Ignore package index (only looking at --find-links URLs instead).", -) - - -def find_links() -> Option: - return Option( - "-f", - "--find-links", - dest="find_links", - action="append", - default=[], - metavar="url", - help="If a URL or path to an html file, then parse for links to " - "archives such as sdist (.tar.gz) or wheel (.whl) files. " - "If a local path or file:// URL that's a directory, " - "then look for archives in the directory listing. " - "Links to VCS project URLs are not supported.", - ) - - -def trusted_host() -> Option: - return Option( - "--trusted-host", - dest="trusted_hosts", - action="append", - metavar="HOSTNAME", - default=[], - help="Mark this host or host:port pair as trusted, even though it " - "does not have valid or any HTTPS.", - ) - - -def constraints() -> Option: - return Option( - "-c", - "--constraint", - dest="constraints", - action="append", - default=[], - metavar="file", - help="Constrain versions using the given constraints file. " - "This option can be used multiple times.", - ) - - -def requirements() -> Option: - return Option( - "-r", - "--requirement", - dest="requirements", - action="append", - default=[], - metavar="file", - help="Install from the given requirements file. " - "This option can be used multiple times.", - ) - - -def editable() -> Option: - return Option( - "-e", - "--editable", - dest="editables", - action="append", - default=[], - metavar="path/url", - help=( - "Install a project in editable mode (i.e. setuptools " - '"develop mode") from a local project path or a VCS url.' - ), - ) - - -def _handle_src(option: Option, opt_str: str, value: str, parser: OptionParser) -> None: - value = os.path.abspath(value) - setattr(parser.values, option.dest, value) - - -src: Callable[..., Option] = partial( - PipOption, - "--src", - "--source", - "--source-dir", - "--source-directory", - dest="src_dir", - type="path", - metavar="dir", - default=get_src_prefix(), - action="callback", - callback=_handle_src, - help="Directory to check out editable projects into. " - 'The default in a virtualenv is "/src". ' - 'The default for global installs is "/src".', -) - - -def _get_format_control(values: Values, option: Option) -> Any: - """Get a format_control object.""" - return getattr(values, option.dest) - - -def _handle_no_binary( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - existing = _get_format_control(parser.values, option) - FormatControl.handle_mutual_excludes( - value, - existing.no_binary, - existing.only_binary, - ) - - -def _handle_only_binary( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - existing = _get_format_control(parser.values, option) - FormatControl.handle_mutual_excludes( - value, - existing.only_binary, - existing.no_binary, - ) - - -def no_binary() -> Option: - format_control = FormatControl(set(), set()) - return Option( - "--no-binary", - dest="format_control", - action="callback", - callback=_handle_no_binary, - type="str", - default=format_control, - help="Do not use binary packages. Can be supplied multiple times, and " - 'each time adds to the existing value. Accepts either ":all:" to ' - 'disable all binary packages, ":none:" to empty the set (notice ' - "the colons), or one or more package names with commas between " - "them (no colons). 
Note that some packages are tricky to compile " - "and may fail to install when this option is used on them.", - ) - - -def only_binary() -> Option: - format_control = FormatControl(set(), set()) - return Option( - "--only-binary", - dest="format_control", - action="callback", - callback=_handle_only_binary, - type="str", - default=format_control, - help="Do not use source packages. Can be supplied multiple times, and " - 'each time adds to the existing value. Accepts either ":all:" to ' - 'disable all source packages, ":none:" to empty the set, or one ' - "or more package names with commas between them. Packages " - "without binary distributions will fail to install when this " - "option is used on them.", - ) - - -platforms: Callable[..., Option] = partial( - Option, - "--platform", - dest="platforms", - metavar="platform", - action="append", - default=None, - help=( - "Only use wheels compatible with . Defaults to the " - "platform of the running system. Use this option multiple times to " - "specify multiple platforms supported by the target interpreter." - ), -) - - -# This was made a separate function for unit-testing purposes. -def _convert_python_version(value: str) -> Tuple[Tuple[int, ...], Optional[str]]: - """ - Convert a version string like "3", "37", or "3.7.3" into a tuple of ints. - - :return: A 2-tuple (version_info, error_msg), where `error_msg` is - non-None if and only if there was a parsing error. - """ - if not value: - # The empty string is the same as not providing a value. - return (None, None) - - parts = value.split(".") - if len(parts) > 3: - return ((), "at most three version parts are allowed") - - if len(parts) == 1: - # Then we are in the case of "3" or "37". - value = parts[0] - if len(value) > 1: - parts = [value[0], value[1:]] - - try: - version_info = tuple(int(part) for part in parts) - except ValueError: - return ((), "each version part must be an integer") - - return (version_info, None) - - -def _handle_python_version( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - """ - Handle a provided --python-version value. - """ - version_info, error_msg = _convert_python_version(value) - if error_msg is not None: - msg = "invalid --python-version value: {!r}: {}".format( - value, - error_msg, - ) - raise_option_error(parser, option=option, msg=msg) - - parser.values.python_version = version_info - - -python_version: Callable[..., Option] = partial( - Option, - "--python-version", - dest="python_version", - metavar="python_version", - action="callback", - callback=_handle_python_version, - type="str", - default=None, - help=dedent( - """\ - The Python interpreter version to use for wheel and "Requires-Python" - compatibility checks. Defaults to a version derived from the running - interpreter. The version can be specified using up to three dot-separated - integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor - version can also be given as a string without dots (e.g. "37" for 3.7.0). - """ - ), -) - - -implementation: Callable[..., Option] = partial( - Option, - "--implementation", - dest="implementation", - metavar="implementation", - default=None, - help=( - "Only use wheels compatible with Python " - "implementation , e.g. 'pp', 'jy', 'cp', " - " or 'ip'. If not specified, then the current " - "interpreter implementation is used. Use 'py' to force " - "implementation-agnostic wheels." 
- ), -) - - -abis: Callable[..., Option] = partial( - Option, - "--abi", - dest="abis", - metavar="abi", - action="append", - default=None, - help=( - "Only use wheels compatible with Python abi , e.g. 'pypy_41'. " - "If not specified, then the current interpreter abi tag is used. " - "Use this option multiple times to specify multiple abis supported " - "by the target interpreter. Generally you will need to specify " - "--implementation, --platform, and --python-version when using this " - "option." - ), -) - - -def add_target_python_options(cmd_opts: OptionGroup) -> None: - cmd_opts.add_option(platforms()) - cmd_opts.add_option(python_version()) - cmd_opts.add_option(implementation()) - cmd_opts.add_option(abis()) - - -def make_target_python(options: Values) -> TargetPython: - target_python = TargetPython( - platforms=options.platforms, - py_version_info=options.python_version, - abis=options.abis, - implementation=options.implementation, - ) - - return target_python - - -def prefer_binary() -> Option: - return Option( - "--prefer-binary", - dest="prefer_binary", - action="store_true", - default=False, - help="Prefer older binary packages over newer source packages.", - ) - - -cache_dir: Callable[..., Option] = partial( - PipOption, - "--cache-dir", - dest="cache_dir", - default=USER_CACHE_DIR, - metavar="dir", - type="path", - help="Store the cache data in .", -) - - -def _handle_no_cache_dir( - option: Option, opt: str, value: str, parser: OptionParser -) -> None: - """ - Process a value provided for the --no-cache-dir option. - - This is an optparse.Option callback for the --no-cache-dir option. - """ - # The value argument will be None if --no-cache-dir is passed via the - # command-line, since the option doesn't accept arguments. However, - # the value can be non-None if the option is triggered e.g. by an - # environment variable, like PIP_NO_CACHE_DIR=true. - if value is not None: - # Then parse the string value to get argument error-checking. - try: - strtobool(value) - except ValueError as exc: - raise_option_error(parser, option=option, msg=str(exc)) - - # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool() - # converted to 0 (like "false" or "no") caused cache_dir to be disabled - # rather than enabled (logic would say the latter). Thus, we disable - # the cache directory not just on values that parse to True, but (for - # backwards compatibility reasons) also on values that parse to False. - # In other words, always set it to False if the option is provided in - # some (valid) form. - parser.values.cache_dir = False - - -no_cache: Callable[..., Option] = partial( - Option, - "--no-cache-dir", - dest="cache_dir", - action="callback", - callback=_handle_no_cache_dir, - help="Disable the cache.", -) - -no_deps: Callable[..., Option] = partial( - Option, - "--no-deps", - "--no-dependencies", - dest="ignore_dependencies", - action="store_true", - default=False, - help="Don't install package dependencies.", -) - -ignore_requires_python: Callable[..., Option] = partial( - Option, - "--ignore-requires-python", - dest="ignore_requires_python", - action="store_true", - help="Ignore the Requires-Python information.", -) - -no_build_isolation: Callable[..., Option] = partial( - Option, - "--no-build-isolation", - dest="build_isolation", - action="store_false", - default=True, - help="Disable isolation when building a modern source distribution. 
" - "Build dependencies specified by PEP 518 must be already installed " - "if this option is used.", -) - -check_build_deps: Callable[..., Option] = partial( - Option, - "--check-build-dependencies", - dest="check_build_deps", - action="store_true", - default=False, - help="Check the build dependencies when PEP517 is used.", -) - - -def _handle_no_use_pep517( - option: Option, opt: str, value: str, parser: OptionParser -) -> None: - """ - Process a value provided for the --no-use-pep517 option. - - This is an optparse.Option callback for the no_use_pep517 option. - """ - # Since --no-use-pep517 doesn't accept arguments, the value argument - # will be None if --no-use-pep517 is passed via the command-line. - # However, the value can be non-None if the option is triggered e.g. - # by an environment variable, for example "PIP_NO_USE_PEP517=true". - if value is not None: - msg = """A value was passed for --no-use-pep517, - probably using either the PIP_NO_USE_PEP517 environment variable - or the "no-use-pep517" config file option. Use an appropriate value - of the PIP_USE_PEP517 environment variable or the "use-pep517" - config file option instead. - """ - raise_option_error(parser, option=option, msg=msg) - - # If user doesn't wish to use pep517, we check if setuptools and wheel are installed - # and raise error if it is not. - packages = ("setuptools", "wheel") - if not all(importlib.util.find_spec(package) for package in packages): - msg = ( - f"It is not possible to use --no-use-pep517 " - f"without {' and '.join(packages)} installed." - ) - raise_option_error(parser, option=option, msg=msg) - - # Otherwise, --no-use-pep517 was passed via the command-line. - parser.values.use_pep517 = False - - -use_pep517: Any = partial( - Option, - "--use-pep517", - dest="use_pep517", - action="store_true", - default=None, - help="Use PEP 517 for building source distributions " - "(use --no-use-pep517 to force legacy behaviour).", -) - -no_use_pep517: Any = partial( - Option, - "--no-use-pep517", - dest="use_pep517", - action="callback", - callback=_handle_no_use_pep517, - default=None, - help=SUPPRESS_HELP, -) - - -def _handle_config_settings( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - key, sep, val = value.partition("=") - if sep != "=": - parser.error(f"Arguments to {opt_str} must be of the form KEY=VAL") # noqa - dest = getattr(parser.values, option.dest) - if dest is None: - dest = {} - setattr(parser.values, option.dest, dest) - if key in dest: - if isinstance(dest[key], list): - dest[key].append(val) - else: - dest[key] = [dest[key], val] - else: - dest[key] = val - - -config_settings: Callable[..., Option] = partial( - Option, - "-C", - "--config-settings", - dest="config_settings", - type=str, - action="callback", - callback=_handle_config_settings, - metavar="settings", - help="Configuration settings to be passed to the PEP 517 build backend. " - "Settings take the form KEY=VALUE. 
Use multiple --config-settings options " - "to pass multiple keys to the backend.", -) - -build_options: Callable[..., Option] = partial( - Option, - "--build-option", - dest="build_options", - metavar="options", - action="append", - help="Extra arguments to be supplied to 'setup.py bdist_wheel'.", -) - -global_options: Callable[..., Option] = partial( - Option, - "--global-option", - dest="global_options", - action="append", - metavar="options", - help="Extra global options to be supplied to the setup.py " - "call before the install or bdist_wheel command.", -) - -no_clean: Callable[..., Option] = partial( - Option, - "--no-clean", - action="store_true", - default=False, - help="Don't clean up build directories.", -) - -pre: Callable[..., Option] = partial( - Option, - "--pre", - action="store_true", - default=False, - help="Include pre-release and development versions. By default, " - "pip only finds stable versions.", -) - -disable_pip_version_check: Callable[..., Option] = partial( - Option, - "--disable-pip-version-check", - dest="disable_pip_version_check", - action="store_true", - default=False, - help="Don't periodically check PyPI to determine whether a new version " - "of pip is available for download. Implied with --no-index.", -) - -root_user_action: Callable[..., Option] = partial( - Option, - "--root-user-action", - dest="root_user_action", - default="warn", - choices=["warn", "ignore"], - help="Action if pip is run as a root user. By default, a warning message is shown.", -) - - -def _handle_merge_hash( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - """Given a value spelled "algo:digest", append the digest to a list - pointed to in a dict by the algo name.""" - if not parser.values.hashes: - parser.values.hashes = {} - try: - algo, digest = value.split(":", 1) - except ValueError: - parser.error( - "Arguments to {} must be a hash name " # noqa - "followed by a value, like --hash=sha256:" - "abcde...".format(opt_str) - ) - if algo not in STRONG_HASHES: - parser.error( - "Allowed hash algorithms for {} are {}.".format( # noqa - opt_str, ", ".join(STRONG_HASHES) - ) - ) - parser.values.hashes.setdefault(algo, []).append(digest) - - -hash: Callable[..., Option] = partial( - Option, - "--hash", - # Hash values eventually end up in InstallRequirement.hashes due to - # __dict__ copying in process_line(). - dest="hashes", - action="callback", - callback=_handle_merge_hash, - type="string", - help="Verify that the package's archive matches this " - "hash before installing. Example: --hash=sha256:abcdef...", -) - - -require_hashes: Callable[..., Option] = partial( - Option, - "--require-hashes", - dest="require_hashes", - action="store_true", - default=False, - help="Require a hash to check each requirement against, for " - "repeatable installs. 
This option is implied when any package in a " - "requirements file has a --hash option.", -) - - -list_path: Callable[..., Option] = partial( - PipOption, - "--path", - dest="path", - type="path", - action="append", - help="Restrict to the specified installation path for listing " - "packages (can be used multiple times).", -) - - -def check_list_path_option(options: Values) -> None: - if options.path and (options.user or options.local): - raise CommandError("Cannot combine '--path' with '--user' or '--local'") - - -list_exclude: Callable[..., Option] = partial( - PipOption, - "--exclude", - dest="excludes", - action="append", - metavar="package", - type="package_name", - help="Exclude specified package from the output", -) - - -no_python_version_warning: Callable[..., Option] = partial( - Option, - "--no-python-version-warning", - dest="no_python_version_warning", - action="store_true", - default=False, - help="Silence deprecation warnings for upcoming unsupported Pythons.", -) - - -# Features that are now always on. A warning is printed if they are used. -ALWAYS_ENABLED_FEATURES = [ - "no-binary-enable-wheel-cache", # always on since 23.1 -] - -use_new_feature: Callable[..., Option] = partial( - Option, - "--use-feature", - dest="features_enabled", - metavar="feature", - action="append", - default=[], - choices=[ - "fast-deps", - "truststore", - ] - + ALWAYS_ENABLED_FEATURES, - help="Enable new functionality, that may be backward incompatible.", -) - -use_deprecated_feature: Callable[..., Option] = partial( - Option, - "--use-deprecated", - dest="deprecated_features_enabled", - metavar="feature", - action="append", - default=[], - choices=[ - "legacy-resolver", - ], - help=("Enable deprecated functionality, that will be removed in the future."), -) - - -########## -# groups # -########## - -general_group: Dict[str, Any] = { - "name": "General Options", - "options": [ - help_, - debug_mode, - isolated_mode, - require_virtualenv, - python, - verbose, - version, - quiet, - log, - no_input, - keyring_provider, - proxy, - retries, - timeout, - exists_action, - trusted_host, - cert, - client_cert, - cache_dir, - no_cache, - disable_pip_version_check, - no_color, - no_python_version_warning, - use_new_feature, - use_deprecated_feature, - ], -} - -index_group: Dict[str, Any] = { - "name": "Package Index Options", - "options": [ - index_url, - extra_index_url, - no_index, - find_links, - ], -} diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py deleted file mode 100644 index f140e8b6db8fe4d373b539c0576ca28b45e837aa..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py +++ /dev/null @@ -1,179 +0,0 @@ -from __future__ import annotations - -import os -import sys -from abc import ABC, abstractmethod -from pathlib import Path - -if sys.version_info >= (3, 8): # pragma: no branch - from typing import Literal # pragma: no cover - - -class PlatformDirsABC(ABC): - """ - Abstract base class for platform directories. - """ - - def __init__( - self, - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, - multipath: bool = False, - opinion: bool = True, - ensure_exists: bool = False, - ): - """ - Create a new platform directory. - - :param appname: See `appname`. 
- :param appauthor: See `appauthor`. - :param version: See `version`. - :param roaming: See `roaming`. - :param multipath: See `multipath`. - :param opinion: See `opinion`. - :param ensure_exists: See `ensure_exists`. - """ - self.appname = appname #: The name of application. - self.appauthor = appauthor - """ - The name of the app author or distributing body for this application. Typically, it is the owning company name. - Defaults to `appname`. You may pass ``False`` to disable it. - """ - self.version = version - """ - An optional version path element to append to the path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this would typically be ``.``. - """ - self.roaming = roaming - """ - Whether to use the roaming appdata directory on Windows. That means that for users on a Windows network setup - for roaming profiles, this user data will be synced on login (see - `here `_). - """ - self.multipath = multipath - """ - An optional parameter only applicable to Unix/Linux which indicates that the entire list of data dirs should be - returned. By default, the first item would only be returned. - """ - self.opinion = opinion #: A flag to indicating to use opinionated values. - self.ensure_exists = ensure_exists - """ - Optionally create the directory (and any missing parents) upon access if it does not exist. - By default, no directories are created. - """ - - def _append_app_name_and_version(self, *base: str) -> str: - params = list(base[1:]) - if self.appname: - params.append(self.appname) - if self.version: - params.append(self.version) - path = os.path.join(base[0], *params) - self._optionally_create_directory(path) - return path - - def _optionally_create_directory(self, path: str) -> None: - if self.ensure_exists: - Path(path).mkdir(parents=True, exist_ok=True) - - @property - @abstractmethod - def user_data_dir(self) -> str: - """:return: data directory tied to the user""" - - @property - @abstractmethod - def site_data_dir(self) -> str: - """:return: data directory shared by users""" - - @property - @abstractmethod - def user_config_dir(self) -> str: - """:return: config directory tied to the user""" - - @property - @abstractmethod - def site_config_dir(self) -> str: - """:return: config directory shared by the users""" - - @property - @abstractmethod - def user_cache_dir(self) -> str: - """:return: cache directory tied to the user""" - - @property - @abstractmethod - def site_cache_dir(self) -> str: - """:return: cache directory shared by users""" - - @property - @abstractmethod - def user_state_dir(self) -> str: - """:return: state directory tied to the user""" - - @property - @abstractmethod - def user_log_dir(self) -> str: - """:return: log directory tied to the user""" - - @property - @abstractmethod - def user_documents_dir(self) -> str: - """:return: documents directory tied to the user""" - - @property - @abstractmethod - def user_runtime_dir(self) -> str: - """:return: runtime directory tied to the user""" - - @property - def user_data_path(self) -> Path: - """:return: data path tied to the user""" - return Path(self.user_data_dir) - - @property - def site_data_path(self) -> Path: - """:return: data path shared by users""" - return Path(self.site_data_dir) - - @property - def user_config_path(self) -> Path: - """:return: config path tied to the user""" - return Path(self.user_config_dir) - - @property - def site_config_path(self) -> Path: - """:return: config path shared by the users""" - return 
Path(self.site_config_dir) - - @property - def user_cache_path(self) -> Path: - """:return: cache path tied to the user""" - return Path(self.user_cache_dir) - - @property - def site_cache_path(self) -> Path: - """:return: cache path shared by users""" - return Path(self.site_cache_dir) - - @property - def user_state_path(self) -> Path: - """:return: state path tied to the user""" - return Path(self.user_state_dir) - - @property - def user_log_path(self) -> Path: - """:return: log path tied to the user""" - return Path(self.user_log_dir) - - @property - def user_documents_path(self) -> Path: - """:return: documents path tied to the user""" - return Path(self.user_documents_dir) - - @property - def user_runtime_path(self) -> Path: - """:return: runtime path tied to the user""" - return Path(self.user_runtime_dir) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_asyncio.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_asyncio.py deleted file mode 100644 index 2e50cd7b40ef18e7f7ee56c0f528bf0ef88b167a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_asyncio.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright 2016 Étienne Bersac -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import functools -import sys -import typing as t -from asyncio import sleep - -from pip._vendor.tenacity import AttemptManager -from pip._vendor.tenacity import BaseRetrying -from pip._vendor.tenacity import DoAttempt -from pip._vendor.tenacity import DoSleep -from pip._vendor.tenacity import RetryCallState - -WrappedFnReturnT = t.TypeVar("WrappedFnReturnT") -WrappedFn = t.TypeVar("WrappedFn", bound=t.Callable[..., t.Awaitable[t.Any]]) - - -class AsyncRetrying(BaseRetrying): - sleep: t.Callable[[float], t.Awaitable[t.Any]] - - def __init__(self, sleep: t.Callable[[float], t.Awaitable[t.Any]] = sleep, **kwargs: t.Any) -> None: - super().__init__(**kwargs) - self.sleep = sleep - - async def __call__( # type: ignore[override] - self, fn: WrappedFn, *args: t.Any, **kwargs: t.Any - ) -> WrappedFnReturnT: - self.begin() - - retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) - while True: - do = self.iter(retry_state=retry_state) - if isinstance(do, DoAttempt): - try: - result = await fn(*args, **kwargs) - except BaseException: # noqa: B902 - retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type] - else: - retry_state.set_result(result) - elif isinstance(do, DoSleep): - retry_state.prepare_for_next_attempt() - await self.sleep(do) - else: - return do # type: ignore[no-any-return] - - def __iter__(self) -> t.Generator[AttemptManager, None, None]: - raise TypeError("AsyncRetrying object is not iterable") - - def __aiter__(self) -> "AsyncRetrying": - self.begin() - self._retry_state = RetryCallState(self, fn=None, args=(), kwargs={}) - return self - - async def __anext__(self) -> AttemptManager: - while True: - do = self.iter(retry_state=self._retry_state) - if do is None: - raise StopAsyncIteration - elif isinstance(do, DoAttempt): - return AttemptManager(retry_state=self._retry_state) - elif isinstance(do, DoSleep): - self._retry_state.prepare_for_next_attempt() - await self.sleep(do) - else: - raise StopAsyncIteration - - def wraps(self, fn: WrappedFn) -> WrappedFn: - fn = super().wraps(fn) - # Ensure wrapper is recognized as a coroutine function. - - @functools.wraps(fn) - async def async_wrapped(*args: t.Any, **kwargs: t.Any) -> t.Any: - return await fn(*args, **kwargs) - - # Preserve attributes - async_wrapped.retry = fn.retry # type: ignore[attr-defined] - async_wrapped.retry_with = fn.retry_with # type: ignore[attr-defined] - - return async_wrapped # type: ignore[return-value] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install.py deleted file mode 100644 index a38cddcda5380aac99bade87e2cdf95d4c99348a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install.py +++ /dev/null @@ -1,814 +0,0 @@ -"""distutils.command.install - -Implements the Distutils 'install' command.""" - -import sys -import os -import contextlib -import sysconfig -import itertools - -from distutils import log -from distutils.core import Command -from distutils.debug import DEBUG -from distutils.sysconfig import get_config_vars -from distutils.file_util import write_file -from distutils.util import convert_path, subst_vars, change_root -from distutils.util import get_platform -from distutils.errors import DistutilsOptionError, DistutilsPlatformError -from . import _framework_compat as fw -from .. 
import _collections - -from site import USER_BASE -from site import USER_SITE - -HAS_USER_SITE = True - -WINDOWS_SCHEME = { - 'purelib': '{base}/Lib/site-packages', - 'platlib': '{base}/Lib/site-packages', - 'headers': '{base}/Include/{dist_name}', - 'scripts': '{base}/Scripts', - 'data': '{base}', -} - -INSTALL_SCHEMES = { - 'posix_prefix': { - 'purelib': '{base}/lib/{implementation_lower}{py_version_short}/site-packages', - 'platlib': '{platbase}/{platlibdir}/{implementation_lower}' - '{py_version_short}/site-packages', - 'headers': '{base}/include/{implementation_lower}' - '{py_version_short}{abiflags}/{dist_name}', - 'scripts': '{base}/bin', - 'data': '{base}', - }, - 'posix_home': { - 'purelib': '{base}/lib/{implementation_lower}', - 'platlib': '{base}/{platlibdir}/{implementation_lower}', - 'headers': '{base}/include/{implementation_lower}/{dist_name}', - 'scripts': '{base}/bin', - 'data': '{base}', - }, - 'nt': WINDOWS_SCHEME, - 'pypy': { - 'purelib': '{base}/site-packages', - 'platlib': '{base}/site-packages', - 'headers': '{base}/include/{dist_name}', - 'scripts': '{base}/bin', - 'data': '{base}', - }, - 'pypy_nt': { - 'purelib': '{base}/site-packages', - 'platlib': '{base}/site-packages', - 'headers': '{base}/include/{dist_name}', - 'scripts': '{base}/Scripts', - 'data': '{base}', - }, -} - -# user site schemes -if HAS_USER_SITE: - INSTALL_SCHEMES['nt_user'] = { - 'purelib': '{usersite}', - 'platlib': '{usersite}', - 'headers': '{userbase}/{implementation}{py_version_nodot_plat}' - '/Include/{dist_name}', - 'scripts': '{userbase}/{implementation}{py_version_nodot_plat}/Scripts', - 'data': '{userbase}', - } - - INSTALL_SCHEMES['posix_user'] = { - 'purelib': '{usersite}', - 'platlib': '{usersite}', - 'headers': '{userbase}/include/{implementation_lower}' - '{py_version_short}{abiflags}/{dist_name}', - 'scripts': '{userbase}/bin', - 'data': '{userbase}', - } - - -INSTALL_SCHEMES.update(fw.schemes) - - -# The keys to an installation scheme; if any new types of files are to be -# installed, be sure to add an entry to every installation scheme above, -# and to SCHEME_KEYS here. -SCHEME_KEYS = ('purelib', 'platlib', 'headers', 'scripts', 'data') - - -def _load_sysconfig_schemes(): - with contextlib.suppress(AttributeError): - return { - scheme: sysconfig.get_paths(scheme, expand=False) - for scheme in sysconfig.get_scheme_names() - } - - -def _load_schemes(): - """ - Extend default schemes with schemes from sysconfig. - """ - - sysconfig_schemes = _load_sysconfig_schemes() or {} - - return { - scheme: { - **INSTALL_SCHEMES.get(scheme, {}), - **sysconfig_schemes.get(scheme, {}), - } - for scheme in set(itertools.chain(INSTALL_SCHEMES, sysconfig_schemes)) - } - - -def _get_implementation(): - if hasattr(sys, 'pypy_version_info'): - return 'PyPy' - else: - return 'Python' - - -def _select_scheme(ob, name): - scheme = _inject_headers(name, _load_scheme(_resolve_scheme(name))) - vars(ob).update(_remove_set(ob, _scheme_attrs(scheme))) - - -def _remove_set(ob, attrs): - """ - Include only attrs that are None in ob. 
- """ - return {key: value for key, value in attrs.items() if getattr(ob, key) is None} - - -def _resolve_scheme(name): - os_name, sep, key = name.partition('_') - try: - resolved = sysconfig.get_preferred_scheme(key) - except Exception: - resolved = fw.scheme(_pypy_hack(name)) - return resolved - - -def _load_scheme(name): - return _load_schemes()[name] - - -def _inject_headers(name, scheme): - """ - Given a scheme name and the resolved scheme, - if the scheme does not include headers, resolve - the fallback scheme for the name and use headers - from it. pypa/distutils#88 - """ - # Bypass the preferred scheme, which may not - # have defined headers. - fallback = _load_scheme(_pypy_hack(name)) - scheme.setdefault('headers', fallback['headers']) - return scheme - - -def _scheme_attrs(scheme): - """Resolve install directories by applying the install schemes.""" - return {f'install_{key}': scheme[key] for key in SCHEME_KEYS} - - -def _pypy_hack(name): - PY37 = sys.version_info < (3, 8) - old_pypy = hasattr(sys, 'pypy_version_info') and PY37 - prefix = not name.endswith(('_user', '_home')) - pypy_name = 'pypy' + '_nt' * (os.name == 'nt') - return pypy_name if old_pypy and prefix else name - - -class install(Command): - - description = "install everything from build directory" - - user_options = [ - # Select installation scheme and set base director(y|ies) - ('prefix=', None, "installation prefix"), - ('exec-prefix=', None, "(Unix only) prefix for platform-specific files"), - ('home=', None, "(Unix only) home directory to install under"), - # Or, just set the base director(y|ies) - ( - 'install-base=', - None, - "base installation directory (instead of --prefix or --home)", - ), - ( - 'install-platbase=', - None, - "base installation directory for platform-specific files " - + "(instead of --exec-prefix or --home)", - ), - ('root=', None, "install everything relative to this alternate root directory"), - # Or, explicitly set the installation scheme - ( - 'install-purelib=', - None, - "installation directory for pure Python module distributions", - ), - ( - 'install-platlib=', - None, - "installation directory for non-pure module distributions", - ), - ( - 'install-lib=', - None, - "installation directory for all module distributions " - + "(overrides --install-purelib and --install-platlib)", - ), - ('install-headers=', None, "installation directory for C/C++ headers"), - ('install-scripts=', None, "installation directory for Python scripts"), - ('install-data=', None, "installation directory for data files"), - # Byte-compilation options -- see install_lib.py for details, as - # these are duplicated from there (but only install_lib does - # anything with them). - ('compile', 'c', "compile .py to .pyc [default]"), - ('no-compile', None, "don't compile .py files"), - ( - 'optimize=', - 'O', - "also compile with optimization: -O1 for \"python -O\", " - "-O2 for \"python -OO\", and -O0 to disable [default: -O0]", - ), - # Miscellaneous control options - ('force', 'f', "force installation (overwrite any existing files)"), - ('skip-build', None, "skip rebuilding everything (for testing/debugging)"), - # Where to install documentation (eventually!) 
- # ('doc-format=', None, "format of documentation to generate"), - # ('install-man=', None, "directory for Unix man pages"), - # ('install-html=', None, "directory for HTML documentation"), - # ('install-info=', None, "directory for GNU info files"), - ('record=', None, "filename in which to record list of installed files"), - ] - - boolean_options = ['compile', 'force', 'skip-build'] - - if HAS_USER_SITE: - user_options.append( - ('user', None, "install in user site-package '%s'" % USER_SITE) - ) - boolean_options.append('user') - - negative_opt = {'no-compile': 'compile'} - - def initialize_options(self): - """Initializes options.""" - # High-level options: these select both an installation base - # and scheme. - self.prefix = None - self.exec_prefix = None - self.home = None - self.user = 0 - - # These select only the installation base; it's up to the user to - # specify the installation scheme (currently, that means supplying - # the --install-{platlib,purelib,scripts,data} options). - self.install_base = None - self.install_platbase = None - self.root = None - - # These options are the actual installation directories; if not - # supplied by the user, they are filled in using the installation - # scheme implied by prefix/exec-prefix/home and the contents of - # that installation scheme. - self.install_purelib = None # for pure module distributions - self.install_platlib = None # non-pure (dists w/ extensions) - self.install_headers = None # for C/C++ headers - self.install_lib = None # set to either purelib or platlib - self.install_scripts = None - self.install_data = None - self.install_userbase = USER_BASE - self.install_usersite = USER_SITE - - self.compile = None - self.optimize = None - - # Deprecated - # These two are for putting non-packagized distributions into their - # own directory and creating a .pth file if it makes sense. - # 'extra_path' comes from the setup file; 'install_path_file' can - # be turned off if it makes no sense to install a .pth file. (But - # better to install it uselessly than to guess wrong and not - # install it when it's necessary and would be used!) Currently, - # 'install_path_file' is always true unless some outsider meddles - # with it. - self.extra_path = None - self.install_path_file = 1 - - # 'force' forces installation, even if target files are not - # out-of-date. 'skip_build' skips running the "build" command, - # handy if you know it's not necessary. 'warn_dir' (which is *not* - # a user option, it's just there so the bdist_* commands can turn - # it off) determines whether we warn about installing to a - # directory not in sys.path. - self.force = 0 - self.skip_build = 0 - self.warn_dir = 1 - - # These are only here as a conduit from the 'build' command to the - # 'install_*' commands that do the real work. ('build_base' isn't - # actually used anywhere, but it might be useful in future.) They - # are not user options, because if the user told the install - # command where the build directory is, that wouldn't affect the - # build command. - self.build_base = None - self.build_lib = None - - # Not defined yet because we don't know anything about - # documentation yet. 
- # self.install_man = None - # self.install_html = None - # self.install_info = None - - self.record = None - - # -- Option finalizing methods ------------------------------------- - # (This is rather more involved than for most commands, - # because this is where the policy for installing third- - # party Python modules on various platforms given a wide - # array of user input is decided. Yes, it's quite complex!) - - def finalize_options(self): # noqa: C901 - """Finalizes options.""" - # This method (and its helpers, like 'finalize_unix()', - # 'finalize_other()', and 'select_scheme()') is where the default - # installation directories for modules, extension modules, and - # anything else we care to install from a Python module - # distribution. Thus, this code makes a pretty important policy - # statement about how third-party stuff is added to a Python - # installation! Note that the actual work of installation is done - # by the relatively simple 'install_*' commands; they just take - # their orders from the installation directory options determined - # here. - - # Check for errors/inconsistencies in the options; first, stuff - # that's wrong on any platform. - - if (self.prefix or self.exec_prefix or self.home) and ( - self.install_base or self.install_platbase - ): - raise DistutilsOptionError( - "must supply either prefix/exec-prefix/home or " - + "install-base/install-platbase -- not both" - ) - - if self.home and (self.prefix or self.exec_prefix): - raise DistutilsOptionError( - "must supply either home or prefix/exec-prefix -- not both" - ) - - if self.user and ( - self.prefix - or self.exec_prefix - or self.home - or self.install_base - or self.install_platbase - ): - raise DistutilsOptionError( - "can't combine user with prefix, " - "exec_prefix/home, or install_(plat)base" - ) - - # Next, stuff that's wrong (or dubious) only on certain platforms. - if os.name != "posix": - if self.exec_prefix: - self.warn("exec-prefix option ignored on this platform") - self.exec_prefix = None - - # Now the interesting logic -- so interesting that we farm it out - # to other methods. The goal of these methods is to set the final - # values for the install_{lib,scripts,data,...} options, using as - # input a heady brew of prefix, exec_prefix, home, install_base, - # install_platbase, user-supplied versions of - # install_{purelib,platlib,lib,scripts,data,...}, and the - # install schemes. Phew! - - self.dump_dirs("pre-finalize_{unix,other}") - - if os.name == 'posix': - self.finalize_unix() - else: - self.finalize_other() - - self.dump_dirs("post-finalize_{unix,other}()") - - # Expand configuration variables, tilde, etc. in self.install_base - # and self.install_platbase -- that way, we can use $base or - # $platbase in the other installation directories and not worry - # about needing recursive variable expansion (shudder). - - py_version = sys.version.split()[0] - (prefix, exec_prefix) = get_config_vars('prefix', 'exec_prefix') - try: - abiflags = sys.abiflags - except AttributeError: - # sys.abiflags may not be defined on all platforms. 
- abiflags = '' - local_vars = { - 'dist_name': self.distribution.get_name(), - 'dist_version': self.distribution.get_version(), - 'dist_fullname': self.distribution.get_fullname(), - 'py_version': py_version, - 'py_version_short': '%d.%d' % sys.version_info[:2], - 'py_version_nodot': '%d%d' % sys.version_info[:2], - 'sys_prefix': prefix, - 'prefix': prefix, - 'sys_exec_prefix': exec_prefix, - 'exec_prefix': exec_prefix, - 'abiflags': abiflags, - 'platlibdir': getattr(sys, 'platlibdir', 'lib'), - 'implementation_lower': _get_implementation().lower(), - 'implementation': _get_implementation(), - } - - # vars for compatibility on older Pythons - compat_vars = dict( - # Python 3.9 and earlier - py_version_nodot_plat=getattr(sys, 'winver', '').replace('.', ''), - ) - - if HAS_USER_SITE: - local_vars['userbase'] = self.install_userbase - local_vars['usersite'] = self.install_usersite - - self.config_vars = _collections.DictStack( - [fw.vars(), compat_vars, sysconfig.get_config_vars(), local_vars] - ) - - self.expand_basedirs() - - self.dump_dirs("post-expand_basedirs()") - - # Now define config vars for the base directories so we can expand - # everything else. - local_vars['base'] = self.install_base - local_vars['platbase'] = self.install_platbase - - if DEBUG: - from pprint import pprint - - print("config vars:") - pprint(dict(self.config_vars)) - - # Expand "~" and configuration variables in the installation - # directories. - self.expand_dirs() - - self.dump_dirs("post-expand_dirs()") - - # Create directories in the home dir: - if self.user: - self.create_home_path() - - # Pick the actual directory to install all modules to: either - # install_purelib or install_platlib, depending on whether this - # module distribution is pure or not. Of course, if the user - # already specified install_lib, use their selection. - if self.install_lib is None: - if self.distribution.has_ext_modules(): # has extensions: non-pure - self.install_lib = self.install_platlib - else: - self.install_lib = self.install_purelib - - # Convert directories from Unix /-separated syntax to the local - # convention. - self.convert_paths( - 'lib', - 'purelib', - 'platlib', - 'scripts', - 'data', - 'headers', - 'userbase', - 'usersite', - ) - - # Deprecated - # Well, we're not actually fully completely finalized yet: we still - # have to deal with 'extra_path', which is the hack for allowing - # non-packagized module distributions (hello, Numerical Python!) to - # get their own directories. - self.handle_extra_path() - self.install_libbase = self.install_lib # needed for .pth file - self.install_lib = os.path.join(self.install_lib, self.extra_dirs) - - # If a new root directory was supplied, make all the installation - # dirs relative to it. - if self.root is not None: - self.change_roots( - 'libbase', 'lib', 'purelib', 'platlib', 'scripts', 'data', 'headers' - ) - - self.dump_dirs("after prepending root") - - # Find out the build directories, ie. where to install from. - self.set_undefined_options( - 'build', ('build_base', 'build_base'), ('build_lib', 'build_lib') - ) - - # Punt on doc directories for now -- after all, we're punting on - # documentation completely! 
- - def dump_dirs(self, msg): - """Dumps the list of user options.""" - if not DEBUG: - return - from distutils.fancy_getopt import longopt_xlate - - log.debug(msg + ":") - for opt in self.user_options: - opt_name = opt[0] - if opt_name[-1] == "=": - opt_name = opt_name[0:-1] - if opt_name in self.negative_opt: - opt_name = self.negative_opt[opt_name] - opt_name = opt_name.translate(longopt_xlate) - val = not getattr(self, opt_name) - else: - opt_name = opt_name.translate(longopt_xlate) - val = getattr(self, opt_name) - log.debug(" %s: %s", opt_name, val) - - def finalize_unix(self): - """Finalizes options for posix platforms.""" - if self.install_base is not None or self.install_platbase is not None: - incomplete_scheme = ( - ( - self.install_lib is None - and self.install_purelib is None - and self.install_platlib is None - ) - or self.install_headers is None - or self.install_scripts is None - or self.install_data is None - ) - if incomplete_scheme: - raise DistutilsOptionError( - "install-base or install-platbase supplied, but " - "installation scheme is incomplete" - ) - return - - if self.user: - if self.install_userbase is None: - raise DistutilsPlatformError("User base directory is not specified") - self.install_base = self.install_platbase = self.install_userbase - self.select_scheme("posix_user") - elif self.home is not None: - self.install_base = self.install_platbase = self.home - self.select_scheme("posix_home") - else: - if self.prefix is None: - if self.exec_prefix is not None: - raise DistutilsOptionError( - "must not supply exec-prefix without prefix" - ) - - # Allow Fedora to add components to the prefix - _prefix_addition = getattr(sysconfig, '_prefix_addition', "") - - self.prefix = os.path.normpath(sys.prefix) + _prefix_addition - self.exec_prefix = os.path.normpath(sys.exec_prefix) + _prefix_addition - - else: - if self.exec_prefix is None: - self.exec_prefix = self.prefix - - self.install_base = self.prefix - self.install_platbase = self.exec_prefix - self.select_scheme("posix_prefix") - - def finalize_other(self): - """Finalizes options for non-posix platforms""" - if self.user: - if self.install_userbase is None: - raise DistutilsPlatformError("User base directory is not specified") - self.install_base = self.install_platbase = self.install_userbase - self.select_scheme(os.name + "_user") - elif self.home is not None: - self.install_base = self.install_platbase = self.home - self.select_scheme("posix_home") - else: - if self.prefix is None: - self.prefix = os.path.normpath(sys.prefix) - - self.install_base = self.install_platbase = self.prefix - try: - self.select_scheme(os.name) - except KeyError: - raise DistutilsPlatformError( - "I don't know how to install stuff on '%s'" % os.name - ) - - def select_scheme(self, name): - _select_scheme(self, name) - - def _expand_attrs(self, attrs): - for attr in attrs: - val = getattr(self, attr) - if val is not None: - if os.name == 'posix' or os.name == 'nt': - val = os.path.expanduser(val) - val = subst_vars(val, self.config_vars) - setattr(self, attr, val) - - def expand_basedirs(self): - """Calls `os.path.expanduser` on install_base, install_platbase and - root.""" - self._expand_attrs(['install_base', 'install_platbase', 'root']) - - def expand_dirs(self): - """Calls `os.path.expanduser` on install dirs.""" - self._expand_attrs( - [ - 'install_purelib', - 'install_platlib', - 'install_lib', - 'install_headers', - 'install_scripts', - 'install_data', - ] - ) - - def convert_paths(self, *names): - """Call `convert_path` 
over `names`.""" - for name in names: - attr = "install_" + name - setattr(self, attr, convert_path(getattr(self, attr))) - - def handle_extra_path(self): - """Set `path_file` and `extra_dirs` using `extra_path`.""" - if self.extra_path is None: - self.extra_path = self.distribution.extra_path - - if self.extra_path is not None: - log.warn( - "Distribution option extra_path is deprecated. " - "See issue27919 for details." - ) - if isinstance(self.extra_path, str): - self.extra_path = self.extra_path.split(',') - - if len(self.extra_path) == 1: - path_file = extra_dirs = self.extra_path[0] - elif len(self.extra_path) == 2: - path_file, extra_dirs = self.extra_path - else: - raise DistutilsOptionError( - "'extra_path' option must be a list, tuple, or " - "comma-separated string with 1 or 2 elements" - ) - - # convert to local form in case Unix notation used (as it - # should be in setup scripts) - extra_dirs = convert_path(extra_dirs) - else: - path_file = None - extra_dirs = '' - - # XXX should we warn if path_file and not extra_dirs? (in which - # case the path file would be harmless but pointless) - self.path_file = path_file - self.extra_dirs = extra_dirs - - def change_roots(self, *names): - """Change the install directories pointed by name using root.""" - for name in names: - attr = "install_" + name - setattr(self, attr, change_root(self.root, getattr(self, attr))) - - def create_home_path(self): - """Create directories under ~.""" - if not self.user: - return - home = convert_path(os.path.expanduser("~")) - for name, path in self.config_vars.items(): - if str(path).startswith(home) and not os.path.isdir(path): - self.debug_print("os.makedirs('%s', 0o700)" % path) - os.makedirs(path, 0o700) - - # -- Command execution methods ------------------------------------- - - def run(self): - """Runs the command.""" - # Obviously have to build before we can install - if not self.skip_build: - self.run_command('build') - # If we built for any other platform, we can't install. - build_plat = self.distribution.get_command_obj('build').plat_name - # check warn_dir - it is a clue that the 'install' is happening - # internally, and not to sys.path, so we don't check the platform - # matches what we are running. - if self.warn_dir and build_plat != get_platform(): - raise DistutilsPlatformError("Can't install when " "cross-compiling") - - # Run all sub-commands (at least those that need to be run) - for cmd_name in self.get_sub_commands(): - self.run_command(cmd_name) - - if self.path_file: - self.create_path_file() - - # write list of installed files, if requested. 
- if self.record: - outputs = self.get_outputs() - if self.root: # strip any package prefix - root_len = len(self.root) - for counter in range(len(outputs)): - outputs[counter] = outputs[counter][root_len:] - self.execute( - write_file, - (self.record, outputs), - "writing list of installed files to '%s'" % self.record, - ) - - sys_path = map(os.path.normpath, sys.path) - sys_path = map(os.path.normcase, sys_path) - install_lib = os.path.normcase(os.path.normpath(self.install_lib)) - if ( - self.warn_dir - and not (self.path_file and self.install_path_file) - and install_lib not in sys_path - ): - log.debug( - ( - "modules installed to '%s', which is not in " - "Python's module search path (sys.path) -- " - "you'll have to change the search path yourself" - ), - self.install_lib, - ) - - def create_path_file(self): - """Creates the .pth file""" - filename = os.path.join(self.install_libbase, self.path_file + ".pth") - if self.install_path_file: - self.execute( - write_file, (filename, [self.extra_dirs]), "creating %s" % filename - ) - else: - self.warn("path file '%s' not created" % filename) - - # -- Reporting methods --------------------------------------------- - - def get_outputs(self): - """Assembles the outputs of all the sub-commands.""" - outputs = [] - for cmd_name in self.get_sub_commands(): - cmd = self.get_finalized_command(cmd_name) - # Add the contents of cmd.get_outputs(), ensuring - # that outputs doesn't contain duplicate entries - for filename in cmd.get_outputs(): - if filename not in outputs: - outputs.append(filename) - - if self.path_file and self.install_path_file: - outputs.append(os.path.join(self.install_libbase, self.path_file + ".pth")) - - return outputs - - def get_inputs(self): - """Returns the inputs of all the sub-commands""" - # XXX gee, this looks familiar ;-( - inputs = [] - for cmd_name in self.get_sub_commands(): - cmd = self.get_finalized_command(cmd_name) - inputs.extend(cmd.get_inputs()) - - return inputs - - # -- Predicates for sub-command list ------------------------------- - - def has_lib(self): - """Returns true if the current distribution has any Python - modules to install.""" - return ( - self.distribution.has_pure_modules() or self.distribution.has_ext_modules() - ) - - def has_headers(self): - """Returns true if the current distribution has any headers to - install.""" - return self.distribution.has_headers() - - def has_scripts(self): - """Returns true if the current distribution has any scripts to. - install.""" - return self.distribution.has_scripts() - - def has_data(self): - """Returns true if the current distribution has any data to. - install.""" - return self.distribution.has_data_files() - - # 'sub_commands': a list of commands this command might have to run to - # get its work done. See cmd.py for more info. 
- sub_commands = [ - ('install_lib', has_lib), - ('install_headers', has_headers), - ('install_scripts', has_scripts), - ('install_data', has_data), - ('install_egg_info', lambda self: True), - ] diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/models.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/models.py deleted file mode 100644 index a8e1e9fae32b71e85c6eae9646d35f79cef00f4b..0000000000000000000000000000000000000000 --- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - 
- self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = 
torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - 
modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f 
= weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, 
inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - 
self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git 
a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/language_model.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/language_model.py deleted file mode 100644 index fe08b934acdb2ab8e4c816df7fc10e3d431e3aaa..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/language_model.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import torch.nn as nn -from torch.autograd import Variable -import numpy as np - - -class WordEmbedding(nn.Module): - """Word Embedding - - The ntoken-th dim is used for padding_idx, which agrees *implicitly* - with the definition in Dictionary. - """ - def __init__(self, ntoken, emb_dim, dropout): - super(WordEmbedding, self).__init__() - self.emb = nn.Embedding(ntoken+1, emb_dim, padding_idx=ntoken) - self.dropout = nn.Dropout(dropout) - self.ntoken = ntoken - self.emb_dim = emb_dim - - def init_embedding(self, np_file): - weight_init = torch.from_numpy(np.load(np_file)) - assert weight_init.shape == (self.ntoken, self.emb_dim) - self.emb.weight.data[:self.ntoken] = weight_init - - def forward(self, x): - emb = self.emb(x) - emb = self.dropout(emb) - return emb - - -class QuestionEmbedding(nn.Module): - def __init__(self, in_dim, num_hid, nlayers, bidirect, dropout, rnn_type='GRU'): - """Module for question embedding - """ - super(QuestionEmbedding, self).__init__() - assert rnn_type == 'LSTM' or rnn_type == 'GRU' - rnn_cls = nn.LSTM if rnn_type == 'LSTM' else nn.GRU - - self.rnn = rnn_cls( - in_dim, num_hid, nlayers, - bidirectional=bidirect, - dropout=dropout, - batch_first=True) - - self.in_dim = in_dim - self.num_hid = num_hid - self.nlayers = nlayers - self.rnn_type = rnn_type - self.ndirections = 1 + int(bidirect) - - def init_hidden(self, batch): - # just to get the type of tensor - weight = next(self.parameters()).data - hid_shape = (self.nlayers * self.ndirections, batch, self.num_hid) - if self.rnn_type == 'LSTM': - return (Variable(weight.new(*hid_shape).zero_()), - Variable(weight.new(*hid_shape).zero_())) - else: - return Variable(weight.new(*hid_shape).zero_()) - - def forward(self, x): - # x: [batch, sequence, in_dim] - batch = x.size(0) - hidden = self.init_hidden(batch) - self.rnn.flatten_parameters() - output, hidden = self.rnn(x, hidden) - - if self.ndirections == 1: - return output[:, -1] - - forward_ = output[:, -1, :self.num_hid] - backward = output[:, 0, self.num_hid:] - return torch.cat((forward_, backward), dim=1) - - def forward_all(self, x): - # x: [batch, sequence, in_dim] - batch = x.size(0) - hidden = self.init_hidden(batch) - self.rnn.flatten_parameters() - output, hidden = self.rnn(x, hidden) - return output diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/tda.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/tda.py deleted file mode 100644 index 8e7d2916ed103c22c52ea087d18221771f6a2077..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/tda.py +++ /dev/null @@ -1,97 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Zhenwei Shao https://github.com/ParadoxZW -# based on the implementation in https://github.com/hengyuan-hu/bottom-up-attention-vqa -# ELU is chosen as the activation function in non-linear layers due to -# the experiment results that indicate ELU is better than ReLU in BUTD model. 
-# -------------------------------------------------------- - -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.utils.weight_norm import weight_norm -import torch -import math - -# ------------------------------ -# ----- Weight Normal MLP ------ -# ------------------------------ - -class MLP(nn.Module): - """ - class for non-linear fully connect network - """ - - def __init__(self, dims, act='ELU', dropout_r=0.0): - super(MLP, self).__init__() - - layers = [] - for i in range(len(dims) - 1): - in_dim = dims[i] - out_dim = dims[i + 1] - if dropout_r > 0: - layers.append(nn.Dropout(dropout_r)) - layers.append(weight_norm(nn.Linear(in_dim, out_dim), dim=None)) - if act != '': - layers.append(getattr(nn, act)()) - - self.mlp = nn.Sequential(*layers) - - def forward(self, x): - return self.mlp(x) - -# ------------------------------ -# ---Top Down Attention Map ---- -# ------------------------------ - - -class AttnMap(nn.Module): - ''' - implementation of top down attention - ''' - def __init__(self, __C): - super(AttnMap, self).__init__() - self.__C = __C - self.linear_q = weight_norm( - nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE), dim=None) - self.linear_v = weight_norm( - nn.Linear(__C.IMG_FEAT_SIZE, __C.IMG_FEAT_SIZE), dim=None) - self.nonlinear = MLP( - [__C.IMG_FEAT_SIZE + __C.HIDDEN_SIZE, __C.HIDDEN_SIZE], dropout_r=__C.DROPOUT_R) - self.linear = weight_norm(nn.Linear(__C.HIDDEN_SIZE, 1), dim=None) - - def forward(self, q, v): - v = self.linear_v(v) - q = self.linear_q(q) - logits = self.logits(q, v) - w = nn.functional.softmax(logits, 1) - return w - - def logits(self, q, v): - num_objs = v.size(1) - q = q.unsqueeze(1).repeat(1, num_objs, 1) - vq = torch.cat((v, q), 2) - joint_repr = self.nonlinear(vq) - logits = self.linear(joint_repr) - return logits - -# ------------------------------ -# ---- Attended Joint Map ------ -# ------------------------------ - - -class TDA(nn.Module): - def __init__(self, __C): - super(TDA, self).__init__() - - self.__C = __C - self.v_att = AttnMap(__C) - self.q_net = MLP([__C.HIDDEN_SIZE, __C.HIDDEN_SIZE]) - self.v_net = MLP([__C.IMG_FEAT_SIZE, __C.HIDDEN_SIZE]) - - def forward(self, q, v): - att = self.v_att(q, v) - atted_v = (att * v).sum(1) - q_repr = self.q_net(q) - v_repr = self.v_net(atted_v) - joint_repr = q_repr * v_repr - return joint_repr diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_pytypes.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_pytypes.cpp deleted file mode 100644 index 0f8d56410f0add1f2e341e06ea9560ab9e88d643..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_pytypes.cpp +++ /dev/null @@ -1,375 +0,0 @@ -/* - tests/test_pytypes.cpp -- Python type casters - - Copyright (c) 2017 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#include "pybind11_tests.h" - - -TEST_SUBMODULE(pytypes, m) { - // test_int - m.def("get_int", []{return py::int_(0);}); - // test_iterator - m.def("get_iterator", []{return py::iterator();}); - // test_iterable - m.def("get_iterable", []{return py::iterable();}); - // test_list - m.def("get_list", []() { - py::list list; - list.append("value"); - py::print("Entry at position 0:", list[0]); - list[0] = py::str("overwritten"); - list.insert(0, "inserted-0"); - list.insert(2, "inserted-2"); - return list; - }); - m.def("print_list", [](py::list list) { - int index = 0; - for (auto item : list) - py::print("list item {}: {}"_s.format(index++, item)); - }); - // test_none - m.def("get_none", []{return py::none();}); - m.def("print_none", [](py::none none) { - py::print("none: {}"_s.format(none)); - }); - - // test_set - m.def("get_set", []() { - py::set set; - set.add(py::str("key1")); - set.add("key2"); - set.add(std::string("key3")); - return set; - }); - m.def("print_set", [](py::set set) { - for (auto item : set) - py::print("key:", item); - }); - m.def("set_contains", [](py::set set, py::object key) { - return set.contains(key); - }); - m.def("set_contains", [](py::set set, const char* key) { - return set.contains(key); - }); - - // test_dict - m.def("get_dict", []() { return py::dict("key"_a="value"); }); - m.def("print_dict", [](py::dict dict) { - for (auto item : dict) - py::print("key: {}, value={}"_s.format(item.first, item.second)); - }); - m.def("dict_keyword_constructor", []() { - auto d1 = py::dict("x"_a=1, "y"_a=2); - auto d2 = py::dict("z"_a=3, **d1); - return d2; - }); - m.def("dict_contains", [](py::dict dict, py::object val) { - return dict.contains(val); - }); - m.def("dict_contains", [](py::dict dict, const char* val) { - return dict.contains(val); - }); - - // test_str - m.def("str_from_string", []() { return py::str(std::string("baz")); }); - m.def("str_from_bytes", []() { return py::str(py::bytes("boo", 3)); }); - m.def("str_from_object", [](const py::object& obj) { return py::str(obj); }); - m.def("repr_from_object", [](const py::object& obj) { return py::repr(obj); }); - - m.def("str_format", []() { - auto s1 = "{} + {} = {}"_s.format(1, 2, 3); - auto s2 = "{a} + {b} = {c}"_s.format("a"_a=1, "b"_a=2, "c"_a=3); - return py::make_tuple(s1, s2); - }); - - // test_bytes - m.def("bytes_from_string", []() { return py::bytes(std::string("foo")); }); - m.def("bytes_from_str", []() { return py::bytes(py::str("bar", 3)); }); - - // test_capsule - m.def("return_capsule_with_destructor", []() { - py::print("creating capsule"); - return py::capsule([]() { - py::print("destructing capsule"); - }); - }); - - m.def("return_capsule_with_destructor_2", []() { - py::print("creating capsule"); - return py::capsule((void *) 1234, [](void *ptr) { - py::print("destructing capsule: {}"_s.format((size_t) ptr)); - }); - }); - - m.def("return_capsule_with_name_and_destructor", []() { - auto capsule = py::capsule((void *) 1234, "pointer type description", [](PyObject *ptr) { - if (ptr) { - auto name = PyCapsule_GetName(ptr); - py::print("destructing capsule ({}, '{}')"_s.format( - (size_t) PyCapsule_GetPointer(ptr, name), name - )); - } - }); - void *contents = capsule; - py::print("created capsule ({}, '{}')"_s.format((size_t) contents, capsule.name())); - return capsule; - }); - - // test_accessors - m.def("accessor_api", [](py::object o) { - auto d = py::dict(); - - d["basic_attr"] = o.attr("basic_attr"); - - auto l = py::list(); - for (const auto &item : o.attr("begin_end")) { - 
l.append(item); - } - d["begin_end"] = l; - - d["operator[object]"] = o.attr("d")["operator[object]"_s]; - d["operator[char *]"] = o.attr("d")["operator[char *]"]; - - d["attr(object)"] = o.attr("sub").attr("attr_obj"); - d["attr(char *)"] = o.attr("sub").attr("attr_char"); - try { - o.attr("sub").attr("missing").ptr(); - } catch (const py::error_already_set &) { - d["missing_attr_ptr"] = "raised"_s; - } - try { - o.attr("missing").attr("doesn't matter"); - } catch (const py::error_already_set &) { - d["missing_attr_chain"] = "raised"_s; - } - - d["is_none"] = o.attr("basic_attr").is_none(); - - d["operator()"] = o.attr("func")(1); - d["operator*"] = o.attr("func")(*o.attr("begin_end")); - - // Test implicit conversion - py::list implicit_list = o.attr("begin_end"); - d["implicit_list"] = implicit_list; - py::dict implicit_dict = o.attr("__dict__"); - d["implicit_dict"] = implicit_dict; - - return d; - }); - - m.def("tuple_accessor", [](py::tuple existing_t) { - try { - existing_t[0] = 1; - } catch (const py::error_already_set &) { - // --> Python system error - // Only new tuples (refcount == 1) are mutable - auto new_t = py::tuple(3); - for (size_t i = 0; i < new_t.size(); ++i) { - new_t[i] = i; - } - return new_t; - } - return py::tuple(); - }); - - m.def("accessor_assignment", []() { - auto l = py::list(1); - l[0] = 0; - - auto d = py::dict(); - d["get"] = l[0]; - auto var = l[0]; - d["deferred_get"] = var; - l[0] = 1; - d["set"] = l[0]; - var = 99; // this assignment should not overwrite l[0] - d["deferred_set"] = l[0]; - d["var"] = var; - - return d; - }); - - // test_constructors - m.def("default_constructors", []() { - return py::dict( - "bytes"_a=py::bytes(), - "str"_a=py::str(), - "bool"_a=py::bool_(), - "int"_a=py::int_(), - "float"_a=py::float_(), - "tuple"_a=py::tuple(), - "list"_a=py::list(), - "dict"_a=py::dict(), - "set"_a=py::set() - ); - }); - - m.def("converting_constructors", [](py::dict d) { - return py::dict( - "bytes"_a=py::bytes(d["bytes"]), - "str"_a=py::str(d["str"]), - "bool"_a=py::bool_(d["bool"]), - "int"_a=py::int_(d["int"]), - "float"_a=py::float_(d["float"]), - "tuple"_a=py::tuple(d["tuple"]), - "list"_a=py::list(d["list"]), - "dict"_a=py::dict(d["dict"]), - "set"_a=py::set(d["set"]), - "memoryview"_a=py::memoryview(d["memoryview"]) - ); - }); - - m.def("cast_functions", [](py::dict d) { - // When converting between Python types, obj.cast() should be the same as T(obj) - return py::dict( - "bytes"_a=d["bytes"].cast(), - "str"_a=d["str"].cast(), - "bool"_a=d["bool"].cast(), - "int"_a=d["int"].cast(), - "float"_a=d["float"].cast(), - "tuple"_a=d["tuple"].cast(), - "list"_a=d["list"].cast(), - "dict"_a=d["dict"].cast(), - "set"_a=d["set"].cast(), - "memoryview"_a=d["memoryview"].cast() - ); - }); - - m.def("convert_to_pybind11_str", [](py::object o) { return py::str(o); }); - - m.def("get_implicit_casting", []() { - py::dict d; - d["char*_i1"] = "abc"; - const char *c2 = "abc"; - d["char*_i2"] = c2; - d["char*_e"] = py::cast(c2); - d["char*_p"] = py::str(c2); - - d["int_i1"] = 42; - int i = 42; - d["int_i2"] = i; - i++; - d["int_e"] = py::cast(i); - i++; - d["int_p"] = py::int_(i); - - d["str_i1"] = std::string("str"); - std::string s2("str1"); - d["str_i2"] = s2; - s2[3] = '2'; - d["str_e"] = py::cast(s2); - s2[3] = '3'; - d["str_p"] = py::str(s2); - - py::list l(2); - l[0] = 3; - l[1] = py::cast(6); - l.append(9); - l.append(py::cast(12)); - l.append(py::int_(15)); - - return py::dict( - "d"_a=d, - "l"_a=l - ); - }); - - // test_print - 
m.def("print_function", []() { - py::print("Hello, World!"); - py::print(1, 2.0, "three", true, std::string("-- multiple args")); - auto args = py::make_tuple("and", "a", "custom", "separator"); - py::print("*args", *args, "sep"_a="-"); - py::print("no new line here", "end"_a=" -- "); - py::print("next print"); - - auto py_stderr = py::module::import("sys").attr("stderr"); - py::print("this goes to stderr", "file"_a=py_stderr); - - py::print("flush", "flush"_a=true); - - py::print("{a} + {b} = {c}"_s.format("a"_a="py::print", "b"_a="str.format", "c"_a="this")); - }); - - m.def("print_failure", []() { py::print(42, UnregisteredType()); }); - - m.def("hash_function", [](py::object obj) { return py::hash(obj); }); - - m.def("test_number_protocol", [](py::object a, py::object b) { - py::list l; - l.append(a.equal(b)); - l.append(a.not_equal(b)); - l.append(a < b); - l.append(a <= b); - l.append(a > b); - l.append(a >= b); - l.append(a + b); - l.append(a - b); - l.append(a * b); - l.append(a / b); - l.append(a | b); - l.append(a & b); - l.append(a ^ b); - l.append(a >> b); - l.append(a << b); - return l; - }); - - m.def("test_list_slicing", [](py::list a) { - return a[py::slice(0, -1, 2)]; - }); - - m.def("test_memoryview_object", [](py::buffer b) { - return py::memoryview(b); - }); - - m.def("test_memoryview_buffer_info", [](py::buffer b) { - return py::memoryview(b.request()); - }); - - m.def("test_memoryview_from_buffer", [](bool is_unsigned) { - static const int16_t si16[] = { 3, 1, 4, 1, 5 }; - static const uint16_t ui16[] = { 2, 7, 1, 8 }; - if (is_unsigned) - return py::memoryview::from_buffer( - ui16, { 4 }, { sizeof(uint16_t) }); - else - return py::memoryview::from_buffer( - si16, { 5 }, { sizeof(int16_t) }); - }); - - m.def("test_memoryview_from_buffer_nativeformat", []() { - static const char* format = "@i"; - static const int32_t arr[] = { 4, 7, 5 }; - return py::memoryview::from_buffer( - arr, sizeof(int32_t), format, { 3 }, { sizeof(int32_t) }); - }); - - m.def("test_memoryview_from_buffer_empty_shape", []() { - static const char* buf = ""; - return py::memoryview::from_buffer(buf, 1, "B", { }, { }); - }); - - m.def("test_memoryview_from_buffer_invalid_strides", []() { - static const char* buf = "\x02\x03\x04"; - return py::memoryview::from_buffer(buf, 1, "B", { 3 }, { }); - }); - - m.def("test_memoryview_from_buffer_nullptr", []() { - return py::memoryview::from_buffer( - static_cast(nullptr), 1, "B", { }, { }); - }); - -#if PY_MAJOR_VERSION >= 3 - m.def("test_memoryview_from_memory", []() { - const char* buf = "\xff\xe1\xab\x37"; - return py::memoryview::from_memory( - buf, static_cast(strlen(buf))); - }); -#endif -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/pointer.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/pointer.h deleted file mode 100644 index 8efeb33c46d2f5034586ac31cdacc8145eed6246..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/pointer.h +++ /dev/null @@ -1,351 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace cpp -{ - -template class pointer; - -} // end cpp -} // end system -} // end thrust - - -/*! \cond - */ - -// specialize thrust::iterator_traits to avoid problems with the name of -// pointer's constructor shadowing its nested pointer type -// do this before pointer is defined so the specialization is correctly -// used inside the definition -namespace thrust -{ - -template - struct iterator_traits > -{ - private: - typedef thrust::system::cpp::pointer ptr; - - public: - typedef typename ptr::iterator_category iterator_category; - typedef typename ptr::value_type value_type; - typedef typename ptr::difference_type difference_type; - typedef ptr pointer; - typedef typename ptr::reference reference; -}; // end iterator_traits - -} // end thrust - -/*! \endcond - */ - - -namespace thrust -{ -namespace system -{ - -/*! \addtogroup system_backends Systems - * \ingroup system - * \{ - */ - -/*! \namespace thrust::system::cpp - * \brief \p thrust::system::cpp is the namespace containing functionality for allocating, manipulating, - * and deallocating memory available to Thrust's standard C++ backend system. - * The identifiers are provided in a separate namespace underneath thrust::system - * for import convenience but are also aliased in the top-level thrust::cpp - * namespace for easy access. - * - */ -namespace cpp -{ - -// forward declaration of reference for pointer -template class reference; - -/*! \cond - */ - -// XXX nvcc + msvc have trouble instantiating reference below -// this is a workaround -namespace detail -{ - -template - struct reference_msvc_workaround -{ - typedef thrust::system::cpp::reference type; -}; // end reference_msvc_workaround - -} // end detail - -/*! \endcond - */ - - -/*! \p pointer stores a pointer to an object allocated in memory available to the cpp system. - * This type provides type safety when dispatching standard algorithms on ranges resident - * in cpp memory. - * - * \p pointer has pointer semantics: it may be dereferenced and manipulated with pointer arithmetic. - * - * \p pointer can be created with the function \p cpp::malloc, or by explicitly calling its constructor - * with a raw pointer. - * - * The raw pointer encapsulated by a \p pointer may be obtained by eiter its get member function - * or the \p raw_pointer_cast function. - * - * \note \p pointer is not a "smart" pointer; it is the programmer's responsibility to deallocate memory - * pointed to by \p pointer. - * - * \tparam T specifies the type of the pointee. - * - * \see cpp::malloc - * \see cpp::free - * \see raw_pointer_cast - */ -template - class pointer - : public thrust::pointer< - T, - thrust::system::cpp::tag, - thrust::system::cpp::reference, - thrust::system::cpp::pointer - > -{ - /*! \cond - */ - - private: - typedef thrust::pointer< - T, - thrust::system::cpp::tag, - //thrust::system::cpp::reference, - typename detail::reference_msvc_workaround::type, - thrust::system::cpp::pointer - > super_t; - - /*! \endcond - */ - - public: - // note that cpp::pointer's member functions need __host__ __device__ - // to interoperate with nvcc + iterators' dereference member function - - /*! \p pointer's no-argument constructor initializes its encapsulated pointer to \c 0. 
- */ - __host__ __device__ - pointer() : super_t() {} - - #if THRUST_CPP_DIALECT >= 2011 - // NOTE: This is needed so that Thrust smart pointers can be used in - // `std::unique_ptr`. - __host__ __device__ - pointer(decltype(nullptr)) : super_t(nullptr) {} - #endif - - /*! This constructor allows construction of a pointer from a T*. - * - * \param ptr A raw pointer to copy from, presumed to point to a location in memory - * accessible by the \p cpp system. - * \tparam OtherT \p OtherT shall be convertible to \p T. - */ - template - __host__ __device__ - explicit pointer(OtherT *ptr) : super_t(ptr) {} - - /*! This constructor allows construction from another pointer-like object with related type. - * - * \param other The \p OtherPointer to copy. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::cpp::tag and its element type shall be convertible to \p T. - */ - template - __host__ __device__ - pointer(const OtherPointer &other, - typename thrust::detail::enable_if_pointer_is_convertible< - OtherPointer, - pointer - >::type * = 0) : super_t(other) {} - - /*! This constructor allows construction from another pointer-like object with \p void type. - * - * \param other The \p OtherPointer to copy. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::cpp::tag and its element type shall be \p void. - */ - template - __host__ __device__ - explicit - pointer(const OtherPointer &other, - typename thrust::detail::enable_if_void_pointer_is_system_convertible< - OtherPointer, - pointer - >::type * = 0) : super_t(other) {} - - /*! Assignment operator allows assigning from another pointer-like object with related type. - * - * \param other The other pointer-like object to assign from. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::cpp::tag and its element type shall be convertible to \p T. - */ - template - __host__ __device__ - typename thrust::detail::enable_if_pointer_is_convertible< - OtherPointer, - pointer, - pointer & - >::type - operator=(const OtherPointer &other) - { - return super_t::operator=(other); - } - - #if THRUST_CPP_DIALECT >= 2011 - // NOTE: This is needed so that Thrust smart pointers can be used in - // `std::unique_ptr`. - __host__ __device__ - pointer& operator=(decltype(nullptr)) - { - super_t::operator=(nullptr); - return *this; - } - #endif -}; // end pointer - -/*! \p reference is a wrapped reference to an object stored in memory available to the \p cpp system. - * \p reference is the type of the result of dereferencing a \p cpp::pointer. - * - * \tparam T Specifies the type of the referenced object. - */ -template - class reference - : public thrust::reference< - T, - thrust::system::cpp::pointer, - thrust::system::cpp::reference - > -{ - /*! \cond - */ - - private: - typedef thrust::reference< - T, - thrust::system::cpp::pointer, - thrust::system::cpp::reference - > super_t; - - /*! \endcond - */ - - public: - /*! \cond - */ - - typedef typename super_t::value_type value_type; - typedef typename super_t::pointer pointer; - - /*! \endcond - */ - - /*! This constructor initializes this \p reference to refer to an object - * pointed to by the given \p pointer. After this \p reference is constructed, - * it shall refer to the object pointed to by \p ptr. - * - * \param ptr A \p pointer to copy from. 
- */ - __host__ __device__ - explicit reference(const pointer &ptr) - : super_t(ptr) - {} - - /*! This constructor accepts a const reference to another \p reference of related type. - * After this \p reference is constructed, it shall refer to the same object as \p other. - * - * \param other A \p reference to copy from. - * \tparam OtherT The element type of the other \p reference. - * - * \note This constructor is templated primarily to allow initialization of reference - * from reference. - */ - template - __host__ __device__ - reference(const reference &other, - typename thrust::detail::enable_if_convertible< - typename reference::pointer, - pointer - >::type * = 0) - : super_t(other) - {} - - /*! Copy assignment operator copy assigns from another \p reference of related type. - * - * \param other The other \p reference to assign from. - * \return *this - * \tparam OtherT The element type of the other \p reference. - */ - template - reference &operator=(const reference &other); - - /*! Assignment operator assigns from a \p value_type. - * - * \param x The \p value_type to assign from. - * \return *this - */ - reference &operator=(const value_type &x); -}; // end reference - -/*! Exchanges the values of two objects referred to by \p reference. - * \p x The first \p reference of interest. - * \p y The second \p reference of interest. - */ -template -__host__ __device__ -void swap(reference x, reference y); - -} // end cpp - -/*! \} - */ - -} // end system - -namespace cpp -{ - -using thrust::system::cpp::pointer; -using thrust::system::cpp::reference; - -} // end cpp - -} // end thrust - -#include diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/inner_product.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/inner_product.h deleted file mode 100644 index bd6aec606c16e5eb4c5aa3276b7d374647b021cd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/inner_product.h +++ /dev/null @@ -1,94 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include -#include -#include - -namespace thrust -{ - -namespace cuda_cub { - -template -T __host__ __device__ -inner_product(execution_policy &policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2, - T init, - ReduceOp reduce_op, - ProductOp product_op) -{ - typedef typename iterator_traits::difference_type size_type; - size_type num_items = static_cast(thrust::distance(first1, last1)); - typedef transform_pair_of_input_iterators_t - binop_iterator_t; - - return cuda_cub::reduce_n(policy, - binop_iterator_t(first1, first2, product_op), - num_items, - init, - reduce_op); -} - -template -T __host__ __device__ -inner_product(execution_policy &policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2, - T init) -{ - return cuda_cub::inner_product(policy, - first1, - last1, - first2, - init, - plus(), - multiplies()); -} - -} // namespace cuda_cub - -} // end namespace thrust -#endif diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory.h deleted file mode 100644 index 9b2f070ccd4139d5f535d47d1b685b7f397ba330..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory.h +++ /dev/null @@ -1,95 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/system/omp/memory.h - * \brief Managing memory associated with Thrust's OpenMP system. - */ - -#pragma once - -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace omp -{ - -/*! Allocates an area of memory available to Thrust's omp system. - * \param n Number of bytes to allocate. - * \return A omp::pointer pointing to the beginning of the newly - * allocated memory. A null omp::pointer is returned if - * an error occurs. - * \note The omp::pointer returned by this function must be - * deallocated with \p omp::free. - * \see omp::free - * \see std::malloc - */ -inline pointer malloc(std::size_t n); - -/*! Allocates a typed area of memory available to Thrust's omp system. - * \param n Number of elements to allocate. - * \return A omp::pointer pointing to the beginning of the newly - * allocated memory. A null omp::pointer is returned if - * an error occurs. 
- * \note The omp::pointer returned by this function must be - * deallocated with \p omp::free. - * \see omp::free - * \see std::malloc - */ -template -inline pointer malloc(std::size_t n); - -/*! Deallocates an area of memory previously allocated by omp::malloc. - * \param ptr A omp::pointer pointing to the beginning of an area - * of memory previously allocated with omp::malloc. - * \see omp::malloc - * \see std::free - */ -inline void free(pointer ptr); - -/*! \p omp::allocator is the default allocator used by the \p omp system's containers such as - * omp::vector if no user-specified allocator is provided. \p omp::allocator allocates - * (deallocates) storage with \p omp::malloc (\p omp::free). - */ -template -using allocator = thrust::mr::stateless_resource_allocator; - -} // end omp -} // end system - -/*! \namespace thrust::omp - * \brief \p thrust::omp is a top-level alias for thrust::system::omp. - */ -namespace omp -{ - -using thrust::system::omp::malloc; -using thrust::system::omp::free; -using thrust::system::omp::allocator; - -} // end omp - -} // end thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/single_stage.py b/spaces/CVPR/WALT/mmdet/models/detectors/single_stage.py deleted file mode 100644 index 5172bdbd945889445eeaa18398c9f0118bb845ad..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/single_stage.py +++ /dev/null @@ -1,154 +0,0 @@ -import torch -import torch.nn as nn - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class SingleStageDetector(BaseDetector): - """Base class for single-stage detectors. - - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(SingleStageDetector, self).__init__() - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.init_weights(pretrained=pretrained) - - def init_weights(self, pretrained=None): - """Initialize the weights in detector. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - super(SingleStageDetector, self).init_weights(pretrained) - self.backbone.init_weights(pretrained=pretrained) - if self.with_neck: - if isinstance(self.neck, nn.Sequential): - for m in self.neck: - m.init_weights() - else: - self.neck.init_weights() - self.bbox_head.init_weights() - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. 
- img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - # get origin input shape to support onnx dynamic shape - if torch.onnx.is_in_onnx_export(): - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - # skip post-processing when exporting to ONNX - if torch.onnx.is_in_onnx_export(): - return bbox_list - - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in bbox_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. 
- """ - assert hasattr(self.bbox_head, 'aug_test'), \ - f'{self.bbox_head.__class__.__name__}' \ - ' does not support test-time augmentation' - - feats = self.extract_feats(imgs) - return [self.bbox_head.aug_test(feats, img_metas, rescale=rescale)] diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/sparse_rcnn.py b/spaces/CVPR/WALT/mmdet/models/detectors/sparse_rcnn.py deleted file mode 100644 index 0dbd0250f189e610a0bbc72b0dab2559e26857ae..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/sparse_rcnn.py +++ /dev/null @@ -1,110 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class SparseRCNN(TwoStageDetector): - r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_""" - - def __init__(self, *args, **kwargs): - super(SparseRCNN, self).__init__(*args, **kwargs) - assert self.with_rpn, 'Sparse R-CNN do not support external proposals' - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """Forward function of SparseR-CNN in train stage. - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (List[Tensor], optional) : Segmentation masks for - each box. But we don't support it in this architecture. - proposals (List[Tensor], optional): override rpn proposals with - custom proposals. Use when `with_rpn` is False. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - assert proposals is None, 'Sparse R-CNN does not support' \ - ' external proposals' - assert gt_masks is None, 'Sparse R-CNN does not instance segmentation' - - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.forward_train(x, img_metas) - roi_losses = self.roi_head.forward_train( - x, - proposal_boxes, - proposal_features, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_masks=gt_masks, - imgs_whwh=imgs_whwh) - return roi_losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. 
- """ - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, img_metas) - bbox_results = self.roi_head.simple_test( - x, - proposal_boxes, - proposal_features, - img_metas, - imgs_whwh=imgs_whwh, - rescale=rescale) - return bbox_results - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - # backbone - x = self.extract_feat(img) - # rpn - num_imgs = len(img) - dummy_img_metas = [ - dict(img_shape=(800, 1333, 3)) for _ in range(num_imgs) - ] - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, dummy_img_metas) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposal_boxes, - proposal_features, - dummy_img_metas) - return roi_outs diff --git a/spaces/CVPR/WALT/walt/datasets/pipelines/instaboost.py b/spaces/CVPR/WALT/walt/datasets/pipelines/instaboost.py deleted file mode 100644 index 38b6819f60587a6e0c0f6d57bfda32bb3a7a4267..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/walt/datasets/pipelines/instaboost.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class InstaBoost(object): - r"""Data augmentation method in `InstaBoost: Boosting Instance - Segmentation Via Probability Map Guided Copy-Pasting - `_. - - Refer to https://github.com/GothicAi/Instaboost for implementation details. - """ - - def __init__(self, - action_candidate=('normal', 'horizontal', 'skip'), - action_prob=(1, 0, 0), - scale=(0.8, 1.2), - dx=15, - dy=15, - theta=(-1, 1), - color_prob=0.5, - hflag=False, - aug_ratio=0.5): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError( - 'Please run "pip install instaboostfast" ' - 'to install instaboostfast first for instaboost augmentation.') - self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob, - scale, dx, dy, theta, - color_prob, hflag) - self.aug_ratio = aug_ratio - - def _load_anns(self, results): - labels = results['ann_info']['labels'] - masks = results['ann_info']['masks'] - bboxes = results['ann_info']['bboxes'] - n = len(labels) - - anns = [] - for i in range(n): - label = labels[i] - bbox = bboxes[i] - mask = masks[i] - x1, y1, x2, y2 = bbox - # assert (x2 - x1) >= 1 and (y2 - y1) >= 1 - bbox = [x1, y1, x2 - x1, y2 - y1] - anns.append({ - 'category_id': label, - 'segmentation': mask, - 'bbox': bbox - }) - - return anns - - def _parse_anns(self, results, anns, img): - gt_bboxes = [] - gt_labels = [] - gt_masks_ann = [] - for ann in anns: - x1, y1, w, h = ann['bbox'] - # TODO: more essential bug need to be fixed in instaboost - if w <= 0 or h <= 0: - continue - bbox = [x1, y1, x1 + w, y1 + h] - gt_bboxes.append(bbox) - gt_labels.append(ann['category_id']) - gt_masks_ann.append(ann['segmentation']) - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - results['ann_info']['labels'] = gt_labels - results['ann_info']['bboxes'] = gt_bboxes - results['ann_info']['masks'] = gt_masks_ann - results['img'] = img - return results - - def __call__(self, results): - img = results['img'] - orig_type = img.dtype - anns = self._load_anns(results) - if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError('Please run "pip install instaboostfast" ' - 'to install instaboostfast first.') - anns, img = 
instaboost.get_new_data( - anns, img.astype(np.uint8), self.cfg, background=None) - - results = self._parse_anns(results, anns, img.astype(orig_type)) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})' - return repr_str diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/losses/__init__.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/losses/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/update.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/update.js deleted file mode 100644 index a33c30fcd613cbd39ce1a47037b319ebd2d7dcd3..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/update.js +++ /dev/null @@ -1,240 +0,0 @@ -import plugin from '../../lib/plugins/plugin.js' -import { createRequire } from 'module' -import lodash from 'lodash' -import fs from 'node:fs' -import { Restart } from './restart.js' -import common from '../../lib/common/common.js' - -const require = createRequire(import.meta.url) -const { exec, execSync } = require('child_process') - -let uping = false - -export class update extends plugin { - constructor() { - super({ - name: '更新', - dsc: '#更新 #强制更新', - event: 'message', - priority: 4000, - rule: [ - { - reg: '^#更新日志', - fnc: 'updateLog' - }, - { - reg: '^#(强制)?更新', - fnc: 'update' - }, - { - reg: '^#全部(强制)?更新$', - fnc: 'updateAll', - permission: 'master' - } - ] - }) - - this.typeName = 'TRSS-Yunzai' - } - - async update() { - if (!this.e.isMaster) return false - if (uping) return this.reply('已有命令更新中..请勿重复操作') - - if (/详细|详情|面板|面版/.test(this.e.msg)) return false - - /** 获取插件 */ - const plugin = this.getPlugin() - if (plugin === false) return false - - /** 执行更新 */ - await this.runUpdate(plugin) - - /** 是否需要重启 */ - if (this.isUp) { - // await this.reply('即将执行重启,以应用更新') - setTimeout(() => this.restart(), 2000) - } - } - - getPlugin(plugin = '') { - if (!plugin) { - plugin = this.e.msg.replace(/#(强制)?更新(日志)?/, '') - if (!plugin) return '' - } - - if (!fs.existsSync(`plugins/${plugin}/.git`)) return false - - this.typeName = plugin - return plugin - } - - async execSync(cmd) { - return new Promise((resolve, reject) => { - exec(cmd, { windowsHide: true }, (error, stdout, stderr) => { - resolve({ error, stdout, stderr }) - }) - }) - } - - async runUpdate(plugin = '') { - this.isNowUp = false - - let cm = 'git pull --no-rebase' - - let type = '更新' - if (this.e.msg.includes('强制')) { - type = '强制更新' - cm = `git reset --hard && git pull --rebase --allow-unrelated-histories` - } - if (plugin) cm = `cd "plugins/${plugin}" && ${cm}` - - this.oldCommitId = await this.getcommitId(plugin) - - logger.mark(`${this.e.logFnc} 开始${type}:${this.typeName}`) - - await this.reply(`开始${type} ${this.typeName}`) - uping = true - const ret = await this.execSync(cm) - uping = false - - if (ret.error) { - logger.mark(`${this.e.logFnc} 更新失败:${this.typeName}`) - this.gitErr(ret.error, ret.stdout) - return false - } - - const time = await this.getTime(plugin) - - if (/Already up|已经是最新/g.test(ret.stdout)) { - await this.reply(`${this.typeName} 已是最新\n最后更新时间:${time}`) - } else { - await this.reply(`${this.typeName} 更新成功\n更新时间:${time}`) - this.isUp = true - await this.reply(await this.getLog(plugin)) - } - - logger.mark(`${this.e.logFnc} 最后更新时间:${time}`) - return true - } - - async getcommitId(plugin = '') { - let cm = 'git rev-parse 
--short HEAD' - if (plugin) cm = `cd "plugins/${plugin}" && ${cm}` - - const commitId = await execSync(cm, { encoding: 'utf-8' }) - return lodash.trim(commitId) - } - - async getTime(plugin = '') { - let cm = 'git log -1 --pretty=%cd --date=format:"%F %T"' - if (plugin) cm = `cd "plugins/${plugin}" && ${cm}` - - let time = '' - try { - time = await execSync(cm, { encoding: 'utf-8' }) - time = lodash.trim(time) - } catch (error) { - logger.error(error.toString()) - time = '获取时间失败' - } - - return time - } - - async gitErr(err, stdout) { - const msg = '更新失败!' - const errMsg = err.toString() - stdout = stdout.toString() - - if (errMsg.includes('Timed out')) { - const remote = errMsg.match(/'(.+?)'/g)[0].replace(/'/g, '') - return this.reply(`${msg}\n连接超时:${remote}`) - } - - if (/Failed to connect|unable to access/g.test(errMsg)) { - const remote = errMsg.match(/'(.+?)'/g)[0].replace(/'/g, '') - return this.reply(`${msg}\n连接失败:${remote}`) - } - - if (errMsg.includes('be overwritten by merge')) { - return this.reply(`${msg}\n存在冲突:\n${errMsg}\n请解决冲突后再更新,或者执行#强制更新,放弃本地修改`) - } - - if (stdout.includes('CONFLICT')) { - return this.reply(`${msg}\n存在冲突:\n${errMsg}${stdout}\n请解决冲突后再更新,或者执行#强制更新,放弃本地修改`) - } - - return this.reply([errMsg, stdout]) - } - - async updateAll() { - const dirs = fs.readdirSync('./plugins/') - - await this.runUpdate() - - for (let plu of dirs) { - plu = this.getPlugin(plu) - if (plu === false) continue - await common.sleep(1500) - await this.runUpdate(plu) - } - - if (this.isUp) { - // await this.reply('即将执行重启,以应用更新') - setTimeout(() => this.restart(), 2000) - } - } - - restart() { - new Restart(this.e).restart() - } - - async getLog(plugin = '') { - let cm = 'git log -100 --pretty="%h||[%cd] %s" --date=format:"%F %T"' - if (plugin) cm = `cd "plugins/${plugin}" && ${cm}` - - let logAll - try { - logAll = await execSync(cm, { encoding: 'utf-8' }) - } catch (error) { - logger.error(error.toString()) - await this.reply(error.toString()) - } - - if (!logAll) return false - - logAll = logAll.trim().split('\n') - - let log = [] - for (let str of logAll) { - str = str.split('||') - if (str[0] == this.oldCommitId) break - if (str[1].includes('Merge branch')) continue - log.push(str[1]) - } - let line = log.length - log = log.join('\n\n') - - if (log.length <= 0) return '' - - let end = '' - try { - cm = 'git config -l' - if (plugin) cm = `cd "plugins/${plugin}" && ${cm}` - end = await execSync(cm, { encoding: 'utf-8' }) - end = end.match(/remote\..*\.url=.+/g).join('\n\n').replace(/remote\..*\.url=/g, '').replace(/\/\/([^@]+)@/, '//') - } catch (error) { - logger.error(error.toString()) - await this.reply(error.toString()) - } - - return common.makeForwardMsg(this.e, [log, end], `${plugin || 'TRSS-Yunzai'} 更新日志,共${line}条`) - } - - async updateLog() { - const plugin = this.getPlugin() - if (plugin === false) return false - return this.reply(await this.getLog(plugin)) - } -} \ No newline at end of file diff --git a/spaces/CjangCjengh/Shanghainese-TTS/models.py b/spaces/CjangCjengh/Shanghainese-TTS/models.py deleted file mode 100644 index 0a722b1a69fa5b5bd96da7cf225664df181cd027..0000000000000000000000000000000000000000 --- a/spaces/CjangCjengh/Shanghainese-TTS/models.py +++ /dev/null @@ -1,535 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, 
spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - 
self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - if self.n_vocab!=0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - if self.n_vocab!=0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = 
torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, 
use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, 
y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/CofAI/chat/server/bp.py b/spaces/CofAI/chat/server/bp.py deleted file mode 100644 index 61d416797039dababd9e8222b4fc910ef65c40b9..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/server/bp.py +++ /dev/null @@ -1,6 +0,0 @@ -from flask import Blueprint - -bp = Blueprint('bp', __name__, - template_folder='./../client/html', - static_folder='./../client', - static_url_path='assets') diff --git a/spaces/CofAI/picscore/README.md b/spaces/CofAI/picscore/README.md deleted file mode 100644 index eb7b5aa70785ec02d3819c9905da86cdb950eaa4..0000000000000000000000000000000000000000 --- a/spaces/CofAI/picscore/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PicScore — Picture Generator with Stable Diffusion -emoji: 🖼 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.38.0 -app_file: picscore.py -pinned: true -license: mit ---- - -🖼 Generate pictures with the latest technology in PicScore now for free and unlimited! \ No newline at end of file diff --git a/spaces/Crossper6/stable-diffusion-webui/app.py b/spaces/Crossper6/stable-diffusion-webui/app.py deleted file mode 100644 index fdaa4ede9fdcb0600886acc080297057ca87c0b9..0000000000000000000000000000000000000000 --- a/spaces/Crossper6/stable-diffusion-webui/app.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of 
the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - os.system(f"wget -q https://huggingface.co/MehjourneyClosedAI/OpenAnimeJourney/resolve/main/OpenAnimeJourney.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/OpenAnimeJourney.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - #os.system(f"wget -q https://huggingface.co/B2gan/NovelAI/resolve/main/model.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/novelai.ckpt") - os.system(f"wget --user Crossper6 --password pMRvyayxAP^Nv2$ -q https://huggingface.co/spaces/Crossper6/stable-diffusion-webui/resolve/main/novelai.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/novelai.ckpt") - os.system(f"wget --user Crossper6 --password pMRvyayxAP^Nv2$ -q https://huggingface.co/spaces/Crossper6/stable-diffusion-webui/raw/main/novelai.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/novelai.yaml") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O 
/home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - os.system(f"wget -q https://r2.kamiya-b.me/dreambooth_lib/akakura-sn.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/akakura-sn.ckpt") - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - - os.system(f"python launch.py --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --disable-safe-unpickle --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --precision full --no-half --api --skip-torch-cuda-test") \ No newline at end of file diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.8f423c0c.css b/spaces/DEEMOSTECH/ChatAvatar/static/css/main.8f423c0c.css deleted file mode 100644 index cc2a3787307b68eb3ca373a800bc8eb4018be1ad..0000000000000000000000000000000000000000 --- a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.8f423c0c.css +++ /dev/null @@ -1,2 +0,0 @@ -html{overflow-x:hidden;overflow-y:overlay}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;box-sizing:border-box;color:#cfcfcf;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;margin:0}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}::-webkit-scrollbar{height:16px;width:16px}::-webkit-scrollbar-thumb{border:5px solid;border-radius:8px}::-webkit-scrollbar-corner{display:none}.root{display:flex;justify-content:center;width:100%}.container{background:#121317;height:100vh;width:100%}.\!container{width:100%!important}@media (min-width:640px){.container{max-width:640px}.\!container{max-width:640px!important}}@media (min-width:768px){.container{max-width:768px}.\!container{max-width:768px!important}}@media (min-width:1024px){.container{max-width:1024px}.\!container{max-width:1024px!important}}@media (min-width:1280px){.container{max-width:1280px}.\!container{max-width:1280px!important}}@media (min-width:1536px){.container{max-width:1536px}.\!container{max-width:1536px!important}}.App{--theme-color:#4a00e0;--font-dark-color:#434343;--font-gray-color:#aaa;--font-light-color:#cfcfcf;--bg-light-color:#fff;--bg-gray0-color:#f8f8f8;--bg-gray1-color:#ececec;--bg-gray2-color:#7c7c7c;--bg-gray3-color:#373737;--bg-theme-color:#e7e3f1;--bg-dark-color:#121317;--side-gap:5rem;--radius:0.5rem;--shadow:-10px 0px 12px 1px hsla(0,0%,53%,.16);text-align:center}.App *{box-sizing:border-box;transition:all .3s}.App ::-webkit-scrollbar-thumb{background-color:rgba(0,0,0,.2)}textarea{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;border:1px solid transparent;color:var(--font-dark-color);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica 
Neue,sans-serif;font-size:1rem;line-height:1.5rem;outline:none;padding:0;resize:none}textarea:focus{border-color:var(--theme-color)}img{-webkit-user-drag:none;-webkit-user-select:none;user-select:none}.gallery_con__Y2mej{align-items:flex-start;display:flex;justify-content:center;margin-top:var(--side-gap);padding:0 var(--side-gap);width:100%}.gallery_menuCon__fVdFJ{margin-right:2rem;width:-webkit-max-content;width:max-content}.gallery_menu__U2btD{align-items:center;background-color:initial;border:2px solid transparent;border-radius:1.5rem;cursor:pointer;display:flex;height:3rem;justify-content:center;line-height:1rem;margin-bottom:1rem;text-align:center;width:6rem}.gallery_menu__U2btD.gallery_selected__T2qcs,.gallery_menu__U2btD:hover{background-color:var(--bg-gray3-color);color:#fff}.gallery_menu__U2btD.gallery_selected__T2qcs{border-color:#fff}.gallery_cardsCon__wAfcp{align-items:flex-start;display:flex;flex-grow:1;flex-shrink:1;flex-wrap:wrap;justify-content:flex-start;max-height:100vh;max-width:calc(1600px + 9rem);overflow-y:auto}.gallery_cardsCon__wAfcp::-webkit-scrollbar-thumb{background-color:hsla(0,0%,100%,.2);border:5px solid #121317;border-radius:8px}.gallery_card__noUoL{background-color:var(--bg-gray3-color);border-radius:var(--radius);cursor:pointer;font-size:.75rem;height:260px;margin-bottom:1rem;margin-right:1rem;overflow:hidden;position:relative;width:200px}.gallery_coverImg__BYj-o,.gallery_coverImg__BYj-o img{height:100%;width:100%}.gallery_prompt__9PEmb{background-color:#f8f8f880;border-radius:var(--radius);bottom:1rem;color:var(--font-dark-color);height:0;left:1rem;overflow:hidden;padding:0 .5rem;position:absolute;right:1rem;text-align:left;white-space:pre-wrap;word-break:break-all}.gallery_prompt__9PEmb.gallery_show__c2k50{height:-webkit-fit-content;height:-moz-fit-content;height:fit-content;padding:.5rem}.gallery_infoCon__E8oLy{align-items:center;bottom:1rem;color:var(--font-dark-color);display:flex;justify-content:flex-start;left:1rem;position:absolute;right:1rem}.gallery_avatar__KWBmI,.gallery_avatar__KWBmI img{border-radius:12px;height:24px;overflow:hidden;width:24px}.gallery_avatar__KWBmI{margin-right:1rem}.gallery_spaceholder__xJwYU{flex-grow:1;flex-shrink:1}.header_con__M\+u1W{align-items:center;display:flex;justify-content:center;padding:0 var(--side-gap);width:100vw}.header_header__Y7CqP{align-items:center;border-bottom:1px solid hsla(0,0%,100%,.1);display:flex;justify-content:space-between;padding:1rem 0;width:100%}.header_logoCon__MIdGL{align-items:flex-start;display:flex;height:3rem;justify-content:center}.header_logo__90zuC{height:3rem;margin-right:1rem}.header_logoCon__MIdGL>div{font-size:2rem;font-weight:700;line-height:2rem;margin-top:5px}.header_avatar__B3zXB{background:var(--bg-gray2-color);border-radius:50%;overflow:hidden}.header_avatar__B3zXB,.header_avatar__B3zXB img{height:3rem;width:3rem}.login_con__\+RJgQ{background:#000;box-shadow:-5px 0 20px 0 hsla(0,0%,100%,.2);height:100vh;padding:var(--side-gap);position:fixed;right:0;top:0;z-index:9}.login_close__JulM-{cursor:pointer;-webkit-user-select:none;user-select:none}.result_con__gHOU1{align-items:center;color:var(--font-dark-color);display:flex;height:100vh;justify-content:center;position:fixed;top:0;width:0;width:100vw}.result_con__gHOU1 
*{flex-shrink:0}.result_board__PCvVJ{align-items:center;background-color:var(--bg-light-color);border-radius:var(--radius);display:flex;height:80vh;justify-content:center;min-height:36rem;min-width:64rem;padding:1.5rem;width:100vh}.result_col__S-fRD{align-items:center;display:flex;flex-direction:column;flex-shrink:0;height:100%;justify-content:flex-start;position:relative;width:calc(50% - .5rem)}.result_col__S-fRD:first-child{margin-right:1rem}.result_colTitle__R8k\+A{align-items:flex-end;color:var(--font-gray-color);display:flex;font-size:1.2rem;font-weight:700;height:3rem;justify-content:space-between;line-height:1.2rem;margin-bottom:1rem;width:100%}.result_colTitle__R8k\+A>div{margin-bottom:.5rem}.result_colTitle__R8k\+A>div.result_restart__fLq8E{border-radius:5px;cursor:pointer;font-size:1rem;font-weight:400;margin-bottom:0;margin-left:1rem;padding:.5rem;-webkit-user-select:none;user-select:none}.result_restart__fLq8E:hover{background-color:var(--bg-gray0-color);color:var(--font-dark-color)}.result_spaceholder__GAxGZ{flex-grow:1;flex-shrink:1}.result_lang__85-De{cursor:pointer;font-weight:400;margin-right:1rem;-webkit-user-select:none;user-select:none}.result_lang__85-De.result_en__n-Jo7{margin-left:1rem;margin-right:0;width:4rem}.result_lang__85-De:hover{font-weight:700}.result_lang__85-De.result_selected__kDzD1{color:var(--font-dark-color);font-weight:700}.result_regene__yKazF{color:var(--theme-color);cursor:pointer;font-weight:400;-webkit-user-select:none;user-select:none}.result_chatCon__Hm\+zJ{background-color:var(--bg-gray0-color);border-radius:var(--radius);height:calc(100% - 4rem);padding:1rem}.result_chatCon__Hm\+zJ,.result_chatMsgCon__x8UTP{align-items:center;display:flex;flex-direction:column;flex-grow:1;flex-shrink:1;justify-content:flex-start;width:100%}.result_chatMsgCon__x8UTP{overflow-y:overlay;text-align:left}.result_chatMsgCon__x8UTP::-webkit-scrollbar-thumb{border:none;border-radius:3px}.result_chatMsgCon__x8UTP::-webkit-scrollbar{width:6px}.result_chatMsgRow__dr9Qg{align-items:flex-start;display:flex;flex-direction:row;justify-content:flex-start;margin-bottom:1rem;width:100%}.result_chatMsgRow__dr9Qg.result_user__bUuRg{flex-direction:row-reverse}.result_avatar__B2zOp{background:var(--bg-gray2-color);border-radius:1.5rem;margin-left:0;margin-right:1rem;overflow:hidden}.result_avatar__B2zOp,.result_avatar__B2zOp img{height:3rem;width:3rem}.result_user__bUuRg .result_avatar__B2zOp{margin-left:1rem;margin-right:0}.result_bubble__GexXm{background:var(--bg-theme-color);border-radius:var(--radius);flex-shrink:1;line-height:1.5rem;padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_bubble__GexXm.result_unactive__zyVF2{background:var(--bg-gray1-color)}.result_user__bUuRg 
.result_bubble__GexXm{background:var(--bg-light-color)}.result_chatIptCon__LXDF-{align-items:center;display:flex;flex-direction:column;justify-content:flex-start;width:100%}.result_chatTipsCon__w4uUf{align-items:flex-end;display:flex;flex-direction:row;justify-content:flex-start;margin-top:1rem;max-width:100%;overflow-x:auto;overflow-y:hidden;width:100%}.result_chatTipsCon__w4uUf::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_chatTips__6b9zJ{background:var(--bg-light-color);border-radius:var(--radius);cursor:pointer;margin-right:1rem;padding:1rem;text-align:left;white-space:pre-wrap;width:15.5rem;word-break:break-all}.result_chatTips__6b9zJ:last-child{margin-right:0}.result_chatRowCon__jLGk3{align-items:flex-start;display:flex;flex-direction:row;justify-content:space-between;margin-top:1rem;width:100%}.result_iptLineCon__nLuWa{flex-grow:1;flex-shrink:1;line-height:1.5rem;margin-right:1rem;position:relative;text-align:left}.result_iptSpaceholder__hAkD5{border:1px solid transparent;max-height:calc(9rem + 2px);visibility:hidden}.result_iptSpaceholder__hAkD5,.result_ipt__tA\+g4{padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_ipt__tA\+g4{background:var(--bg-light-color);border-radius:var(--radius);bottom:0;left:0;overflow-y:auto;position:absolute;right:0;top:0}.result_ipt__tA\+g4::-webkit-scrollbar-thumb{border-color:var(--bg-light-color)}.result_btn__h5tQr{align-items:center;background-color:var(--theme-color);border:1px solid var(--theme-color);border-radius:1.5rem;color:#fff;cursor:pointer;display:flex;font-weight:700;height:calc(3rem - 2px);justify-content:center;line-height:1rem;padding:0 1.5rem;-webkit-user-select:none;user-select:none}.result_btn__h5tQr:hover{background:transparent;color:var(--theme-color)}.result_con__gHOU1 .result_btn__h5tQr.result_disabled__lB61-{background:var(--bg-gray2-color);border-color:var(--bg-gray2-color);color:var(--font-light-color);cursor:not-allowed}.result_iptArea__23TZc{background:var(--bg-gray0-color);border-radius:var(--radius);height:12rem;margin-bottom:1rem;padding:1rem;text-align:left;width:100%}.result_iptArea__23TZc::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_generateBtn__UGmBG{margin-bottom:1rem;width:100%}.result_candidateCon__x9kyB{align-items:flex-start;background-color:var(--bg-gray0-color);border-radius:var(--radius);display:flex;flex-direction:row;flex-grow:1;flex-shrink:1;justify-content:space-between;overflow-y:overlay;padding:1rem;position:relative;width:100%}.result_candidateCon__x9kyB::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_candidateCol__eoHna{margin-right:1rem;position:relative;width:calc(33.33333% - .66667rem)}.result_candidateCol__eoHna:last-child{margin-right:0}.result_candidateCol__eoHna 
img{border-radius:var(--radius);cursor:pointer;margin-bottom:1rem;width:100%}.result_creatorCon__tIm3e{align-items:flex-end;display:flex;justify-content:flex-start;margin-bottom:1rem;width:100%}.result_creatorInfoCon__pET8h{text-align:left}.result_creatorName__VLTXL{color:var(--font-dark-color);font-size:1.2rem;font-weight:700;line-height:1.8rem}.result_creatorInfo__CkbWU{color:var(--font-gray-color);font-size:1rem;line-height:1.2rem}.result_modelView__Y25w5{background:var(--bg-gray0-color);border-radius:var(--radius);flex-grow:1;flex-shrink:1;overflow:hidden;width:100%}.result_modelInfoCon__bXw5O{align-items:center;bottom:1rem;display:flex;flex-direction:column;justify-content:flex-end;left:1rem;position:absolute;right:1rem;text-align:left}.result_progressInfo__g9iwR{margin-bottom:.5rem;width:100%}.result_progressTrack__I6zDn{background:var(--bg-light-color);border-radius:2px;height:4px;position:relative;width:100%}.result_progressThumb__mbBQj{background-color:var(--theme-color);border-radius:2px;height:4px;left:0;position:absolute;top:0}.result_modelPrompt__DzUbD{background:var(--bg-light-color);border-radius:var(--radius);margin-top:1rem;min-height:3rem;padding:1rem;width:100%}.welcome_con__o1kmf{align-items:center;display:flex;flex-direction:column;justify-content:flex-start;padding-top:calc(50vh - 18rem);position:relative;width:100%}.welcome_con__o1kmf>img{position:absolute;top:4rem;width:50vw}.welcome_mainCon__H1gv\+{z-index:999}.welcome_title__Gd8m4{color:#fff;font-family:Courier New;font-size:5rem;font-weight:700;line-height:5rem;margin-bottom:1.5rem}.welcome_ioCon__PQZXU{background-color:#fff;border-radius:1rem;border-style:solid;margin-left:8rem;margin-right:8rem;margin-top:24rem;padding:2rem;width:calc(100% - 16rem)}.welcome_iptCon__KpWEL{align-items:center;background:#ededf2;border-radius:1rem;display:flex;height:4rem;justify-content:space-between;margin-bottom:2rem;width:100%}.welcome_iptCon__KpWEL>img{height:2rem;margin-right:1rem;position:static;width:2rem}.welcome_ipt__ayi9Z{background:#ededf2;border:none;border-radius:1rem;color:var(--font-dark-color);flex-grow:1;font-size:1rem;height:100%;outline:none;padding:0 2rem}.welcome_ipt__ayi9Z::-webkit-input-placeholder{font-size:1rem}.welcome_ipt__ayi9Z::placeholder{font-size:1rem}.welcome_btnCon__Mx-ta,.welcome_btn__jCuoG{align-items:center;display:flex;justify-content:center}.welcome_btn__jCuoG{border:1px solid #8f8f8f;border-radius:1rem;cursor:pointer;height:3rem;line-height:1rem;-webkit-user-select:none;user-select:none;width:100%}.welcome_btn__jCuoG:last-child{background:#4a00e0;border:none;font-weight:700}.welcome_btn__jCuoG.welcome_disabled__pcSzv{cursor:not-allowed}.welcome_btn__jCuoG:hover{color:#fff} -/*# sourceMappingURL=main.8f423c0c.css.map*/ \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/buffered.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/buffered.py deleted file mode 100644 index 11474c16a988d0e1c50be2637b14438985bcfbc9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/buffered.py +++ /dev/null @@ -1,118 +0,0 @@ -from __future__ import annotations - -from dataclasses import dataclass, field -from typing import Any, Callable, Mapping - -from .. 
import ClosedResourceError, DelimiterNotFound, EndOfStream, IncompleteRead -from ..abc import AnyByteReceiveStream, ByteReceiveStream - - -@dataclass(eq=False) -class BufferedByteReceiveStream(ByteReceiveStream): - """ - Wraps any bytes-based receive stream and uses a buffer to provide sophisticated receiving - capabilities in the form of a byte stream. - """ - - receive_stream: AnyByteReceiveStream - _buffer: bytearray = field(init=False, default_factory=bytearray) - _closed: bool = field(init=False, default=False) - - async def aclose(self) -> None: - await self.receive_stream.aclose() - self._closed = True - - @property - def buffer(self) -> bytes: - """The bytes currently in the buffer.""" - return bytes(self._buffer) - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return self.receive_stream.extra_attributes - - async def receive(self, max_bytes: int = 65536) -> bytes: - if self._closed: - raise ClosedResourceError - - if self._buffer: - chunk = bytes(self._buffer[:max_bytes]) - del self._buffer[:max_bytes] - return chunk - elif isinstance(self.receive_stream, ByteReceiveStream): - return await self.receive_stream.receive(max_bytes) - else: - # With a bytes-oriented object stream, we need to handle any surplus bytes we get from - # the receive() call - chunk = await self.receive_stream.receive() - if len(chunk) > max_bytes: - # Save the surplus bytes in the buffer - self._buffer.extend(chunk[max_bytes:]) - return chunk[:max_bytes] - else: - return chunk - - async def receive_exactly(self, nbytes: int) -> bytes: - """ - Read exactly the given amount of bytes from the stream. - - :param nbytes: the number of bytes to read - :return: the bytes read - :raises ~anyio.IncompleteRead: if the stream was closed before the requested - amount of bytes could be read from the stream - - """ - while True: - remaining = nbytes - len(self._buffer) - if remaining <= 0: - retval = self._buffer[:nbytes] - del self._buffer[:nbytes] - return bytes(retval) - - try: - if isinstance(self.receive_stream, ByteReceiveStream): - chunk = await self.receive_stream.receive(remaining) - else: - chunk = await self.receive_stream.receive() - except EndOfStream as exc: - raise IncompleteRead from exc - - self._buffer.extend(chunk) - - async def receive_until(self, delimiter: bytes, max_bytes: int) -> bytes: - """ - Read from the stream until the delimiter is found or max_bytes have been read. 
- - :param delimiter: the marker to look for in the stream - :param max_bytes: maximum number of bytes that will be read before raising - :exc:`~anyio.DelimiterNotFound` - :return: the bytes read (not including the delimiter) - :raises ~anyio.IncompleteRead: if the stream was closed before the delimiter - was found - :raises ~anyio.DelimiterNotFound: if the delimiter is not found within the - bytes read up to the maximum allowed - - """ - delimiter_size = len(delimiter) - offset = 0 - while True: - # Check if the delimiter can be found in the current buffer - index = self._buffer.find(delimiter, offset) - if index >= 0: - found = self._buffer[:index] - del self._buffer[: index + len(delimiter) :] - return bytes(found) - - # Check if the buffer is already at or over the limit - if len(self._buffer) >= max_bytes: - raise DelimiterNotFound(max_bytes) - - # Read more data into the buffer from the socket - try: - data = await self.receive_stream.receive() - except EndOfStream as exc: - raise IncompleteRead from exc - - # Move the offset forward and add the new data to the buffer - offset = max(len(self._buffer) - delimiter_size + 1, 0) - self._buffer.extend(data) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-82224c86.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-82224c86.js deleted file mode 100644 index fff7142988e1d6fad4e671ed9c52ddb7f18fb4e8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-82224c86.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as O,e as P,s as Q,N as T,k as N,O as R,K as g,U,p as C,o as B,M as z,ap as A,Q as j,aw as G,z as q,v as E,A as D,x as S,a1 as X,B as Y,am as Z,P as y,R as x,a7 as p,E as $,ae as ee,h as F,j as K,q as ne,r as ie,t as M,F as k}from"./index-3370be2a.js";/* empty css */import{B as le}from"./Button-89624748.js";import{B as ae}from"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";function ue(n){let e;return{c(){e=y(n[4])},m(i,l){C(i,e,l)},p(i,l){l&16&&x(e,i[4])},d(i){i&&D(e)}}}function te(n){let e,i,l,t,s,b,d;return i=new ae({props:{show_label:n[6],info:n[5],$$slots:{default:[ue]},$$scope:{ctx:n}}}),{c(){e=T("label"),N(i.$$.fragment),l=R(),t=T("input"),g(t,"type","number"),g(t,"min",n[1]),g(t,"max",n[2]),t.disabled=n[3],g(t,"class","svelte-gigvtq"),g(e,"class","block svelte-gigvtq"),U(e,"container",n[7])},m(m,_){C(m,e,_),B(i,e,null),z(e,l),z(e,t),A(t,n[0]),s=!0,b||(d=[j(t,"input",n[11]),j(t,"keypress",n[8]),j(t,"blur",n[9])],b=!0)},p(m,[_]){const r={};_&64&&(r.show_label=m[6]),_&32&&(r.info=m[5]),_&16400&&(r.$$scope={dirty:_,ctx:m}),i.$set(r),(!s||_&2)&&g(t,"min",m[1]),(!s||_&4)&&g(t,"max",m[2]),(!s||_&8)&&(t.disabled=m[3]),_&1&&G(t.value)!==m[0]&&A(t,m[0]),(!s||_&128)&&U(e,"container",m[7])},i(m){s||(q(i.$$.fragment,m),s=!0)},o(m){E(i.$$.fragment,m),s=!1},d(m){m&&D(e),S(i),b=!1,X(d)}}}function se(n,e,i){let{value:l=0}=e,{minimum:t=void 0}=e,{maximum:s=void 0}=e,{value_is_output:b=!1}=e,{disabled:d=!1}=e,{label:m}=e,{info:_=void 0}=e,{show_label:r=!0}=e,{container:h=!0}=e;const u=Y();function o(){!isNaN(l)&&l!==null&&(u("change",l),b||u("input"))}Z(()=>{i(10,b=!1)});async function w(f){await p(),f.key==="Enter"&&(f.preventDefault(),u("submit"))}function c(f){u("blur")}function v(){l=G(this.value),i(0,l)}return n.$$set=f=>{"value"in f&&i(0,l=f.value),"minimum"in f&&i(1,t=f.minimum),"maximum"in f&&i(2,s=f.maximum),"value_is_output"in 
f&&i(10,b=f.value_is_output),"disabled"in f&&i(3,d=f.disabled),"label"in f&&i(4,m=f.label),"info"in f&&i(5,_=f.info),"show_label"in f&&i(6,r=f.show_label),"container"in f&&i(7,h=f.container)},n.$$.update=()=>{n.$$.dirty&1&&o()},[l,t,s,d,m,_,r,h,w,c,b,v]}class me extends O{constructor(e){super(),P(this,e,se,te,Q,{value:0,minimum:1,maximum:2,value_is_output:10,disabled:3,label:4,info:5,show_label:6,container:7})}}function fe(n){let e,i,l,t,s,b;const d=[n[13]];let m={};for(let u=0;uK(l,"value",_)),F.push(()=>K(l,"value_is_output",r)),l.$on("change",n[17]),l.$on("input",n[18]),l.$on("submit",n[19]),l.$on("blur",n[20]),{c(){N(e.$$.fragment),i=R(),N(l.$$.fragment)},m(u,o){B(e,u,o),C(u,i,o),B(l,u,o),b=!0},p(u,o){const w=o&8192?ne(d,[ie(u[13])]):{};e.$set(w);const c={};o&4&&(c.label=u[2]),o&8&&(c.info=u[3]),o&1024&&(c.show_label=u[10]),o&2048&&(c.minimum=u[11]),o&4096&&(c.maximum=u[12]),o&128&&(c.container=u[7]),o&16384&&(c.disabled=u[14]==="static"),!t&&o&1&&(t=!0,c.value=u[0],M(()=>t=!1)),!s&&o&2&&(s=!0,c.value_is_output=u[1],M(()=>s=!1)),l.$set(c)},i(u){b||(q(e.$$.fragment,u),q(l.$$.fragment,u),b=!0)},o(u){E(e.$$.fragment,u),E(l.$$.fragment,u),b=!1},d(u){u&&D(i),S(e,u),S(l,u)}}}function _e(n){let e,i;return e=new le({props:{visible:n[6],elem_id:n[4],elem_classes:n[5],padding:n[7],allow_overflow:!1,scale:n[8],min_width:n[9],$$slots:{default:[fe]},$$scope:{ctx:n}}}),{c(){N(e.$$.fragment)},m(l,t){B(e,l,t),i=!0},p(l,[t]){const s={};t&64&&(s.visible=l[6]),t&16&&(s.elem_id=l[4]),t&32&&(s.elem_classes=l[5]),t&128&&(s.padding=l[7]),t&256&&(s.scale=l[8]),t&512&&(s.min_width=l[9]),t&2129039&&(s.$$scope={dirty:t,ctx:l}),e.$set(s)},i(l){i||(q(e.$$.fragment,l),i=!0)},o(l){E(e.$$.fragment,l),i=!1},d(l){S(e,l)}}}function oe(n,e,i){let{label:l="Number"}=e,{info:t=void 0}=e,{elem_id:s=""}=e,{elem_classes:b=[]}=e,{visible:d=!0}=e,{container:m=!0}=e,{scale:_=null}=e,{min_width:r=void 0}=e,{value:h=0}=e,{show_label:u}=e,{minimum:o=void 0}=e,{maximum:w=void 0}=e,{loading_status:c}=e,{mode:v}=e,{value_is_output:f=!1}=e;function H(a){h=a,i(0,h)}function I(a){f=a,i(1,f)}function J(a){k.call(this,n,a)}function L(a){k.call(this,n,a)}function V(a){k.call(this,n,a)}function W(a){k.call(this,n,a)}return n.$$set=a=>{"label"in a&&i(2,l=a.label),"info"in a&&i(3,t=a.info),"elem_id"in a&&i(4,s=a.elem_id),"elem_classes"in a&&i(5,b=a.elem_classes),"visible"in a&&i(6,d=a.visible),"container"in a&&i(7,m=a.container),"scale"in a&&i(8,_=a.scale),"min_width"in a&&i(9,r=a.min_width),"value"in a&&i(0,h=a.value),"show_label"in a&&i(10,u=a.show_label),"minimum"in a&&i(11,o=a.minimum),"maximum"in a&&i(12,w=a.maximum),"loading_status"in a&&i(13,c=a.loading_status),"mode"in a&&i(14,v=a.mode),"value_is_output"in a&&i(1,f=a.value_is_output)},[h,f,l,t,s,b,d,m,_,r,u,o,w,c,v,H,I,J,L,V,W]}class be extends O{constructor(e){super(),P(this,e,oe,_e,Q,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,container:7,scale:8,min_width:9,value:0,show_label:10,minimum:11,maximum:12,loading_status:13,mode:14,value_is_output:1})}}const we=be,ve=["static","dynamic"],ke=n=>({type:{payload:"number"},description:{payload:"numeric value"},example_data:n.value??1});export{we as Component,ke as document,ve as modes}; -//# sourceMappingURL=index-82224c86.js.map diff --git a/spaces/Datasculptor/MusicGen/audiocraft/modules/activations.py b/spaces/Datasculptor/MusicGen/audiocraft/modules/activations.py deleted file mode 100644 index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000 --- 
a/spaces/Datasculptor/MusicGen/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. 
- - Args: - activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/Detomo/ai-comic-generation/src/lib/replaceTextInSpeechBubbles.ts b/spaces/Detomo/ai-comic-generation/src/lib/replaceTextInSpeechBubbles.ts deleted file mode 100644 index 8566a2f8068feef008348ae7f6d6f06e2d2b1628..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/lib/replaceTextInSpeechBubbles.ts +++ /dev/null @@ -1,98 +0,0 @@ -"use client" - -import { createWorker } from "tesseract.js" -import { loadImageToCanvas } from "./loadImageToCanvas"; - -export async function replaceTextInSpeechBubbles(image: string, customText: string) { - console.log('creating OCR worker to find bubbles inside', image); - - const worker = await createWorker({ - logger: (info) => { - console.log(info) - }, - }); - - const canvas = await loadImageToCanvas(image) - - const ctx = canvas.getContext('2d')!; - - try { - await worker.load(); - await worker.loadLanguage('eng'); - await worker.initialize('eng'); - - const { data } = await worker.recognize(canvas); - const lines = data.lines || []; - - // Draw the lines on the image - ctx.fillStyle = "white"; - - lines.forEach((line) => { - ctx.fillRect(line.bbox.x0, line.bbox.y0, line.bbox.x1 - line.bbox.x0, line.bbox.y1 - line.bbox.y0); - - const bubbleWidth = line.bbox.x1 - line.bbox.x0; - const bubbleHeight = line.bbox.y1 - line.bbox.y0; - let fontSize = 18; - ctx.font = `${fontSize}px Arial`; - - /* - while ( - ctx.measureText(customText).width > bubbleWidth || fontSize * 1.2 // line height - > bubbleHeight) { - fontSize -= 1; - ctx.font = `${fontSize}px Arial`; - } - - const lines = wrapText(ctx, customText, line.bbox.x0, line.bbox.y0, bubbleWidth, fontSize); - - ctx.fillStyle = "black"; - lines.forEach((text, i) => { - ctx.fillText(text, line.bbox.x0, line.bbox.y0 + (i * fontSize * 1.2)); - }); - */ - }) - - await worker.terminate(); - - // Convert the Canvas to image data - const imgAsDataURL = canvas.toDataURL('image/png'); - - if (typeof window !== "undefined") { - const foo = (window as any) - if (!foo.debugJujul) { - foo.debugJujul = [] - } - foo.debugJujul.push({ - lines - }) - } - console.log("lines:", lines) - - return imgAsDataURL; - - } catch (err) { - console.error(err); - } - return ""; -} - -function wrapText(context: CanvasRenderingContext2D, text: string, x: number, y: number, maxWidth: number, lineHeight: number) { - const words = text.split(' '); - let line = ''; - const lines = []; - - for(let n = 0; n < words.length; n++) { - let testLine = line + words[n] + ' '; - let metrics = context.measureText(testLine); - let testWidth = metrics.width; - if (testWidth > maxWidth && n > 0) { - lines.push(line); - line = words[n] + ' '; - } - else { - line = testLine; - } - } - lines.push(line); - return lines; -} \ No newline at end of file diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/inspurai.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/inspurai.py deleted file mode 100644 index c590859fa7717d032290ccc490d22f4494541576..0000000000000000000000000000000000000000 --- a/spaces/Dorado607/ChuanhuChatGPT/modules/models/inspurai.py +++ /dev/null @@ -1,345 +0,0 @@ -# 代码主要来源于 https://github.com/Shawn-Inspur/Yuan-1.0/blob/main/yuan_api/inspurai.py - -import hashlib -import json -import os -import time -import uuid 
-from datetime import datetime - -import pytz -import requests - -from modules.presets import NO_APIKEY_MSG -from modules.models.base_model import BaseLLMModel - - -class Example: - """ store some examples(input, output pairs and formats) for few-shots to prime the model.""" - - def __init__(self, inp, out): - self.input = inp - self.output = out - self.id = uuid.uuid4().hex - - def get_input(self): - """return the input of the example.""" - return self.input - - def get_output(self): - """Return the output of the example.""" - return self.output - - def get_id(self): - """Returns the unique ID of the example.""" - return self.id - - def as_dict(self): - return { - "input": self.get_input(), - "output": self.get_output(), - "id": self.get_id(), - } - - -class Yuan: - """The main class for a user to interface with the Inspur Yuan API. - A user can set account info and add examples of the API request. - """ - - def __init__(self, - engine='base_10B', - temperature=0.9, - max_tokens=100, - input_prefix='', - input_suffix='\n', - output_prefix='答:', - output_suffix='\n\n', - append_output_prefix_to_query=False, - topK=1, - topP=0.9, - frequencyPenalty=1.2, - responsePenalty=1.2, - noRepeatNgramSize=2): - - self.examples = {} - self.engine = engine - self.temperature = temperature - self.max_tokens = max_tokens - self.topK = topK - self.topP = topP - self.frequencyPenalty = frequencyPenalty - self.responsePenalty = responsePenalty - self.noRepeatNgramSize = noRepeatNgramSize - self.input_prefix = input_prefix - self.input_suffix = input_suffix - self.output_prefix = output_prefix - self.output_suffix = output_suffix - self.append_output_prefix_to_query = append_output_prefix_to_query - self.stop = (output_suffix + input_prefix).strip() - self.api = None - - # if self.engine not in ['base_10B','translate','dialog']: - # raise Exception('engine must be one of [\'base_10B\',\'translate\',\'dialog\'] ') - def set_account(self, api_key): - account = api_key.split('||') - self.api = YuanAPI(user=account[0], phone=account[1]) - - def add_example(self, ex): - """Add an example to the object. - Example must be an instance of the Example class.""" - assert isinstance(ex, Example), "Please create an Example object." 
- self.examples[ex.get_id()] = ex - - def delete_example(self, id): - """Delete example with the specific id.""" - if id in self.examples: - del self.examples[id] - - def get_example(self, id): - """Get a single example.""" - return self.examples.get(id, None) - - def get_all_examples(self): - """Returns all examples as a list of dicts.""" - return {k: v.as_dict() for k, v in self.examples.items()} - - def get_prime_text(self): - """Formats all examples to prime the model.""" - return "".join( - [self.format_example(ex) for ex in self.examples.values()]) - - def get_engine(self): - """Returns the engine specified for the API.""" - return self.engine - - def get_temperature(self): - """Returns the temperature specified for the API.""" - return self.temperature - - def get_max_tokens(self): - """Returns the max tokens specified for the API.""" - return self.max_tokens - - def craft_query(self, prompt): - """Creates the query for the API request.""" - q = self.get_prime_text( - ) + self.input_prefix + prompt + self.input_suffix - if self.append_output_prefix_to_query: - q = q + self.output_prefix - - return q - - def format_example(self, ex): - """Formats the input, output pair.""" - return self.input_prefix + ex.get_input( - ) + self.input_suffix + self.output_prefix + ex.get_output( - ) + self.output_suffix - - def response(self, - query, - engine='base_10B', - max_tokens=20, - temperature=0.9, - topP=0.1, - topK=1, - frequencyPenalty=1.0, - responsePenalty=1.0, - noRepeatNgramSize=0): - """Obtains the original result returned by the API.""" - - if self.api is None: - return NO_APIKEY_MSG - try: - # requestId = submit_request(query,temperature,topP,topK,max_tokens, engine) - requestId = self.api.submit_request(query, temperature, topP, topK, max_tokens, engine, frequencyPenalty, - responsePenalty, noRepeatNgramSize) - response_text = self.api.reply_request(requestId) - except Exception as e: - raise e - - return response_text - - def del_special_chars(self, msg): - special_chars = ['', '', '#', '▃', '▁', '▂', ' '] - for char in special_chars: - msg = msg.replace(char, '') - return msg - - def submit_API(self, prompt, trun=[]): - """Submit prompt to yuan API interface and obtain an pure text reply. - :prompt: Question or any content a user may input. - :return: pure text response.""" - query = self.craft_query(prompt) - res = self.response(query, engine=self.engine, - max_tokens=self.max_tokens, - temperature=self.temperature, - topP=self.topP, - topK=self.topK, - frequencyPenalty=self.frequencyPenalty, - responsePenalty=self.responsePenalty, - noRepeatNgramSize=self.noRepeatNgramSize) - if 'resData' in res and res['resData'] != None: - txt = res['resData'] - else: - txt = '模型返回为空,请尝试修改输入' - # 单独针对翻译模型的后处理 - if self.engine == 'translate': - txt = txt.replace(' ##', '').replace(' "', '"').replace(": ", ":").replace(" ,", ",") \ - .replace('英文:', '').replace('文:', '').replace("( ", "(").replace(" )", ")") - else: - txt = txt.replace(' ', '') - txt = self.del_special_chars(txt) - - # trun多结束符截断模型输出 - if isinstance(trun, str): - trun = [trun] - try: - if trun != None and isinstance(trun, list) and trun != []: - for tr in trun: - if tr in txt and tr != "": - txt = txt[:txt.index(tr)] - else: - continue - except: - return txt - return txt - - -class YuanAPI: - ACCOUNT = '' - PHONE = '' - - SUBMIT_URL = "http://api.airyuan.cn:32102/v1/interface/api/infer/getRequestId?" - REPLY_URL = "http://api.airyuan.cn:32102/v1/interface/api/result?" 
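The Yuan wrapper and Example class above are meant to be driven together: set the account, optionally prime the model with a few input/output pairs, then call `submit_API`. A minimal usage sketch follows; the import path is taken from the deleted file's location in this diff, the `user||phone` account string is a placeholder (that format comes from `set_account`, which splits the key on `||`), and a real call needs valid credentials plus network access to the Yuan API.

```python
# Minimal sketch of driving the Yuan wrapper defined above.
# Assumptions: the module is importable as modules.models.inspurai
# (per the file path in this diff) and "my_user||my_phone" is a placeholder.
from modules.models.inspurai import Yuan, Example

yuan = Yuan(engine="base_10B", temperature=0.9, max_tokens=100)
yuan.set_account("my_user||my_phone")            # split on "||" into user and phone
yuan.add_example(Example(inp="1+1=?", out="2"))  # few-shot priming pair

# submit_API crafts the prompt from the stored examples and prefixes, calls the
# remote API via YuanAPI, and post-processes the reply; trun lists stop strings.
reply = yuan.submit_API("2+2=?", trun=["\n"])
print(reply)
```

Since `submit_API` performs the HTTP round trip through `YuanAPI`, the sketch only produces output when run against the live service.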
- - def __init__(self, user, phone): - self.ACCOUNT = user - self.PHONE = phone - - @staticmethod - def code_md5(str): - code = str.encode("utf-8") - m = hashlib.md5() - m.update(code) - result = m.hexdigest() - return result - - @staticmethod - def rest_get(url, header, timeout, show_error=False): - '''Call rest get method''' - try: - response = requests.get(url, headers=header, timeout=timeout, verify=False) - return response - except Exception as exception: - if show_error: - print(exception) - return None - - def header_generation(self): - """Generate header for API request.""" - t = datetime.now(pytz.timezone("Asia/Shanghai")).strftime("%Y-%m-%d") - token = self.code_md5(self.ACCOUNT + self.PHONE + t) - headers = {'token': token} - return headers - - def submit_request(self, query, temperature, topP, topK, max_tokens, engine, frequencyPenalty, responsePenalty, - noRepeatNgramSize): - """Submit query to the backend server and get requestID.""" - headers = self.header_generation() - # url=SUBMIT_URL + "account={0}&data={1}&temperature={2}&topP={3}&topK={4}&tokensToGenerate={5}&type={6}".format(ACCOUNT,query,temperature,topP,topK,max_tokens,"api") - # url=SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}&tokensToGenerate={6}" \ - # "&type={7}".format(engine,ACCOUNT,query,temperature,topP,topK, max_tokens,"api") - url = self.SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}&tokensToGenerate={6}" \ - "&type={7}&frequencyPenalty={8}&responsePenalty={9}&noRepeatNgramSize={10}". \ - format(engine, self.ACCOUNT, query, temperature, topP, topK, max_tokens, "api", frequencyPenalty, - responsePenalty, noRepeatNgramSize) - response = self.rest_get(url, headers, 30) - response_text = json.loads(response.text) - if response_text["flag"]: - requestId = response_text["resData"] - return requestId - else: - raise RuntimeWarning(response_text) - - def reply_request(self, requestId, cycle_count=5): - """Check reply API to get the inference response.""" - url = self.REPLY_URL + "account={0}&requestId={1}".format(self.ACCOUNT, requestId) - headers = self.header_generation() - response_text = {"flag": True, "resData": None} - for i in range(cycle_count): - response = self.rest_get(url, headers, 30, show_error=True) - response_text = json.loads(response.text) - if response_text["resData"] is not None: - return response_text - if response_text["flag"] is False and i == cycle_count - 1: - raise RuntimeWarning(response_text) - time.sleep(3) - return response_text - - -class Yuan_Client(BaseLLMModel): - - def __init__(self, model_name, api_key, user_name="", system_prompt=None): - super().__init__(model_name=model_name, user=user_name) - self.history = [] - self.api_key = api_key - self.system_prompt = system_prompt - - self.input_prefix = "" - self.output_prefix = "" - - def set_text_prefix(self, option, value): - if option == 'input_prefix': - self.input_prefix = value - elif option == 'output_prefix': - self.output_prefix = value - - def get_answer_at_once(self): - # yuan temperature is (0,1] and base model temperature is [0,2], and yuan 0.9 == base 1 so need to convert - temperature = self.temperature if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10 - topP = self.top_p - topK = self.n_choices - # max_tokens should be in [1,200] - max_tokens = self.max_generation_token if self.max_generation_token is not None else 50 - if max_tokens > 200: - max_tokens = 200 - stop = self.stop_sequence if self.stop_sequence is not None else [] - 
examples = [] - system_prompt = self.system_prompt - if system_prompt is not None: - lines = system_prompt.splitlines() - # TODO: support prefixes in system prompt or settings - """ - if lines[0].startswith('-'): - prefixes = lines.pop()[1:].split('|') - self.input_prefix = prefixes[0] - if len(prefixes) > 1: - self.output_prefix = prefixes[1] - if len(prefixes) > 2: - stop = prefixes[2].split(',') - """ - for i in range(0, len(lines), 2): - in_line = lines[i] - out_line = lines[i + 1] if i + 1 < len(lines) else "" - examples.append((in_line, out_line)) - yuan = Yuan(engine=self.model_name.replace('yuanai-1.0-', ''), - temperature=temperature, - max_tokens=max_tokens, - topK=topK, - topP=topP, - input_prefix=self.input_prefix, - input_suffix="", - output_prefix=self.output_prefix, - output_suffix="".join(stop), - ) - if not self.api_key: - return NO_APIKEY_MSG, 0 - yuan.set_account(self.api_key) - - for in_line, out_line in examples: - yuan.add_example(Example(inp=in_line, out=out_line)) - - prompt = self.history[-1]["content"] - answer = yuan.submit_API(prompt, trun=stop) - return answer, len(answer) diff --git a/spaces/EsoCode/text-generation-webui/extensions/multimodal/abstract_pipeline.py b/spaces/EsoCode/text-generation-webui/extensions/multimodal/abstract_pipeline.py deleted file mode 100644 index 584219419d256e7743fd4d5120c56bcfa8f2a9f9..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/extensions/multimodal/abstract_pipeline.py +++ /dev/null @@ -1,62 +0,0 @@ -from abc import ABC, abstractmethod -from typing import List, Optional - -import torch -from PIL import Image - - -class AbstractMultimodalPipeline(ABC): - @staticmethod - @abstractmethod - def name() -> str: - 'name of the pipeline, should be same as in --multimodal-pipeline' - pass - - @staticmethod - @abstractmethod - def image_start() -> Optional[str]: - 'return image start string, string representation of image start token, or None if not applicable' - pass - - @staticmethod - @abstractmethod - def image_end() -> Optional[str]: - 'return image end string, string representation of image end token, or None if not applicable' - pass - - @staticmethod - @abstractmethod - def placeholder_token_id() -> int: - 'return placeholder token id' - pass - - @staticmethod - @abstractmethod - def num_image_embeds() -> int: - 'return the number of embeds used by a single image (for example: 256 for LLaVA)' - pass - - @abstractmethod - def embed_images(self, images: List[Image.Image]) -> torch.Tensor: - 'forward the images through vision pipeline, and return their embeddings' - pass - - @staticmethod - @abstractmethod - def embed_tokens(input_ids: torch.Tensor) -> torch.Tensor: - 'embed tokens, the exact function varies by LLM, for LLaMA it is `shared.model.model.embed_tokens`' - pass - - @staticmethod - @abstractmethod - def placeholder_embeddings() -> torch.Tensor: - 'get placeholder embeddings if there are multiple images, and `add_all_images_to_prompt` is False' - pass - - def _get_device(self, setting_name: str, params: dict): - if params[setting_name] is None: - return torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - return torch.device(params[setting_name]) - - def _get_dtype(self, setting_name: str, params: dict): - return torch.float32 if int(params[setting_name]) == 32 else torch.float16 diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/upfirdn2d/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/upfirdn2d/__init__.py deleted file mode 100644 index 
397e85bea063e97fc4c12ad4d3e15669b69290bd..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/upfirdn2d/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .upfirdn2d import upfirdn2d - -__all__ = ['upfirdn2d'] diff --git a/spaces/Fisharp/starcoder-playground/README.md b/spaces/Fisharp/starcoder-playground/README.md deleted file mode 100644 index 0b9a70b231aac0dc0911d1173b8de0cc05b1cd4d..0000000000000000000000000000000000000000 --- a/spaces/Fisharp/starcoder-playground/README.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: StarCoder Demo -emoji: 💫 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: true -duplicated_from: bigcode/bigcode-playground ---- - - -# ⭐StarCoder Demo💫 - -## Code-Completion Playground 💻 with ⭐StarCoder Models - -This is a demo playground to generate code with the power of ⭐[StarCoder](https://huggingface.co/bigcode/starcoder) a **15B** parameter model for code generation in **80+** programming languages. - -ℹ️ This is not an instruction model but just a code completion tool. - -🗣️For instruction and chatting you can chat with a prompted version of the model directly at the [HuggingFace🤗Chat💬(hf.co/chat)](https://huggingface.co/chat/?model=starcoder) - ---- - -**Intended Use**: this app and its [supporting model](https://huggingface.co/bigcode/starcoder) are provided for demonstration purposes only; not to serve as a replacement for human expertise. For more details on the model's limitations in terms of factuality and biases, please refer to the source [model card](hf.co/bigcode) - -⚠️ Any use or sharing of this demo constitutes your acceptance of the BigCode [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) License Agreement and the use restrictions included within. - ---- - -## Model Formats - -The model is pretrained on code and is formatted with special tokens in addition to the pure code data,\ -such as prefixes specifying the source of the file or tokens separating code from a commit message.\ -Use these templates to explore the model's capacities: - -### 1. Prefixes 🏷️ - -For pure code files, use any combination of the following prefixes: - -```xml -REPONAMEFILENAMESTARS\ncode<|endoftext|> -``` - -STARS can be one of: 0, 1-10, 10-100, 100-1000, 1000+ - -### 2. Commits 💾 - -The commits data is formatted as follows: - -```xml -codetextcode<|endoftext|> -``` - -### 3. Jupyter Notebooks 📓 - -The model is trained on Jupyter notebooks as Python scripts and structured formats like: - -```xml -textcodeoutput -``` - -### 4. Issues 🐛 - -We also trained on GitHub issues using the following formatting: - -```xml -text... -``` - -### 5. Fill-in-the-middle 🧩 - -Fill in the middle requires rearranging the model inputs. 
The playground handles this for you - all you need is to specify where to fill: - -```xml -code beforecode after -``` diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GAIR/Factool/factool/code/tool.py b/spaces/GAIR/Factool/factool/code/tool.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train10_gptmixcliport3_small.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train10_gptmixcliport3_small.sh deleted file mode 100644 index 5276fd7c1a1a024eb05f9cdec5fcc9da88f1bf45..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train10_gptmixcliport3_small.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -#SBATCH -c 10 -#SBATCH -n 1 -#SBATCH -o logs/%j.out -#SBATCH --exclusive -STEPS=${1-'50000'} - - -sh scripts/traintest_scripts/train_test_multi_task_goal_small.sh data \ - "[put-block-in-bowl,align-box-corner,stack-block-pyramid-seq,color-sorted-container-stack,color-sorted-block-race,Four-corner-pyramid-challenge,triangle-block-arrangement,sort-and-stack-clr-blocks,color-coordinated-sphere-insertion,rainbow-stack,align-pair-colored-blocks-along-line,vertical-insertion-blocks,stack-blocks-in-container]" \ - "[put-block-in-bowl,align-box-corner,stack-block-pyramid-seq]" \ - gpt5_mixcliport3_task $STEPS diff --git a/spaces/GeorgeOrville/bingo/src/app/page.tsx b/spaces/GeorgeOrville/bingo/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
- - - ) -} diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/utils/data_utils.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/utils/data_utils.py deleted file mode 100644 index f1ba79f4a2d5cc2b97dce76d87bf6e7cdebbc257..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/utils/data_utils.py +++ /dev/null @@ -1,25 +0,0 @@ -""" -Code adopted from pix2pixHD: -https://github.com/NVIDIA/pix2pixHD/blob/master/data/image_folder.py -""" -import os - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tiff' -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def make_dataset(dir): - images = [] - assert os.path.isdir(dir), '%s is not a valid directory' % dir - for root, _, fnames in sorted(os.walk(dir)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - images.append(path) - return images diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/post_processing/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/post_processing/__init__.py deleted file mode 100644 index 880b3f06609b050aae163b2e38088c1ee4aa0998..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/post_processing/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .bbox_nms import fast_nms, multiclass_nms -from .merge_augs import (merge_aug_bboxes, merge_aug_masks, - merge_aug_proposals, merge_aug_scores) - -__all__ = [ - 'multiclass_nms', 'merge_aug_proposals', 'merge_aug_bboxes', - 'merge_aug_scores', 'merge_aug_masks', 'fast_nms' -] diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py deleted file mode 100644 index 8cc2a7174b765b7ad8808489196e12082a91a2d7..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.tasks import register_task -from fairseq.tasks.multilingual_translation import MultilingualTranslationTask -from fairseq.utils import safe_hasattr - -from .loss.latent_depth import LatentLayersKLLoss, LatentLayersSparsityLoss - - -@register_task("multilingual_translation_latent_depth") -class MultilingualTranslationTaskLatentDepth(MultilingualTranslationTask): - """A task for multiple translation with latent depth. - - See `"Deep Transformer with Latent Depth" - (Li et al., 2020) `_. 
- """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - MultilingualTranslationTask.add_args(parser) - parser.add_argument('--encoder-latent-layer', action='store_true', help='latent layer selection in encoder') - parser.add_argument('--decoder-latent-layer', action='store_true', help='latent layer selection in decoder') - parser.add_argument('--target-layers', default=-1, type=int, - help='number of effective layers to learn; -1 means no constraint') - parser.add_argument('--sparsity-weight', default=0.0, type=float, - help='weight for sparsity loss') - parser.add_argument('--share-weight', default=0.0, type=float, - help='weight for sharing loss') - parser.add_argument('--soft-update', default=1, type=int, - help='number of updates with soft sampling') - parser.add_argument('--anneal-updates', default=1, type=int, - help='number of updates to anneal the KL loss weight') - parser.add_argument('--prior', default="uniform", type=str, - help='prior used for computing KL loss') - # fmt: on - - def __init__(self, args, dicts, training): - super().__init__(args, dicts, training) - self.src_langs, self.tgt_langs = zip( - *[(lang.split("-")[0], lang.split("-")[1]) for lang in args.lang_pairs] - ) - if self.training and self.encoder_latent_layer: - assert self.args.share_encoders - if self.training and self.decoder_latent_layer: - assert self.args.share_decoders - if training or self.encoder_latent_layer or self.decoder_latent_layer: - self.lang_pairs = args.lang_pairs - else: - self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)] - self.eval_lang_pairs = self.lang_pairs - self.model_lang_pairs = self.lang_pairs - if self.training and (self.encoder_latent_layer or self.decoder_latent_layer): - self.kl_loss = LatentLayersKLLoss(self.args) - self.sparsity_loss = LatentLayersSparsityLoss(self.args) - - def _per_lang_pair_train_loss( - self, lang_pair, model, update_num, criterion, sample, optimizer, ignore_grad - ): - src, tgt = lang_pair.split("-") - if self.encoder_latent_layer: - src_lang_idx = self.src_lang_idx_dict[src] - model.models[lang_pair].encoder.set_lang_idx(src_lang_idx) - model.models[lang_pair].encoder.layer_select.hard_select = ( - update_num > self.args.soft_update - ) - if self.decoder_latent_layer: - tgt_lang_idx = self.tgt_lang_idx_dict[tgt] - model.models[lang_pair].decoder.set_lang_idx(tgt_lang_idx) - model.models[lang_pair].decoder.layer_select.hard_select = ( - update_num > self.args.soft_update - ) - - loss, sample_size, logging_output = criterion( - model.models[lang_pair], sample[lang_pair] - ) - if self.encoder_latent_layer: - none_samples = sum( - 1 if x is None else 0 - for x in model.models[lang_pair].encoder.layer_select.layer_samples - ) - if none_samples == 0 or self.args.prior != "agged_posterior": - loss += self.kl_loss( - model.models[lang_pair].encoder.layer_select.layer_samples, - src_lang_idx, - update_num, - sample_size, - ) - if self.decoder_latent_layer: - none_samples = sum( - 1 if x is None else 0 - for x in model.models[lang_pair].decoder.layer_select.layer_samples - ) - if none_samples == 0 or self.args.prior != "agged_posterior": - loss += self.kl_loss( - model.models[lang_pair].decoder.layer_select.layer_samples, - tgt_lang_idx, - update_num, - sample_size, - ) - if ignore_grad: - loss *= 0 - - if hasattr(self, "sparsity_loss") and self.sparsity_loss.is_valid(update_num): - # need to retain the graph if sparsity loss needs to be added - loss.backward(retain_graph=True) - else: 
- optimizer.backward(loss) - - return loss, sample_size, logging_output - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - agg_loss, agg_sample_size, agg_logging_output = super().train_step( - sample, model, criterion, optimizer, update_num, ignore_grad - ) - # compute auxiliary loss from layere sparsity, based on all samples from all languages - if hasattr(self, "sparsity_loss") and self.sparsity_loss.is_valid(update_num): - sparsity_loss = 0 - if self.encoder_latent_layer: - sparsity_loss += self.sparsity_loss( - next( - iter(model.models.values()) - ).encoder.layer_select.layer_samples, - update_num, - agg_sample_size, - ) - if self.decoder_latent_layer: - sparsity_loss += self.sparsity_loss( - next( - iter(model.models.values()) - ).decoder.layer_select.layer_samples, - update_num, - agg_sample_size, - ) - if sparsity_loss > 0: - optimizer.backward(sparsity_loss) - return agg_loss, agg_sample_size, agg_logging_output - - def _per_lang_pair_valid_loss(self, lang_pair, model, criterion, sample): - src, tgt = lang_pair.split("-") - if self.encoder_latent_layer: - src_lang_idx = self.src_lang_idx_dict[src] - model.models[lang_pair].encoder.set_lang_idx(src_lang_idx) - if self.decoder_latent_layer: - tgt_lang_idx = self.tgt_lang_idx_dict[tgt] - model.models[lang_pair].decoder.set_lang_idx(tgt_lang_idx) - loss, sample_size, logging_output = criterion( - model.models[lang_pair], sample[lang_pair] - ) - return loss, sample_size, logging_output - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - if self.encoder_latent_layer or self.decoder_latent_layer: - for model in models: - if self.encoder_latent_layer: - assert model.encoder.layer_select is not None - src_lang_idx = self.src_lang_idx_dict[self.args.source_lang] - model.encoder.set_lang_idx(src_lang_idx) - if self.decoder_latent_layer: - assert model.decoder.layer_select is not None - tgt_lang_idx = self.tgt_lang_idx_dict[self.args.target_lang] - model.decoder.set_lang_idx(tgt_lang_idx) - return super().inference_step( - generator, models, sample, prefix_tokens, constraints - ) - - @property - def encoder_latent_layer(self): - return ( - safe_hasattr(self.args, "encoder_latent_layer") - and self.args.encoder_latent_layer - ) - - @property - def decoder_latent_layer(self): - return ( - safe_hasattr(self.args, "decoder_latent_layer") - and self.args.decoder_latent_layer - ) - - @property - def src_lang_idx_dict(self): - return {lang: lang_idx for lang_idx, lang in enumerate(self.src_langs)} - - @property - def tgt_lang_idx_dict(self): - return {lang: lang_idx for lang_idx, lang in enumerate(self.tgt_langs)} diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py deleted file mode 100644 index 2a287a4e97c66acbd36897b25f2ece5494005f03..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -import time -import torch -import sys -import subprocess - -argslist = list(sys.argv)[1:] -log_dir = argslist[-1] -num_gpus = torch.cuda.device_count() -argslist.append('--n_gpus={}'.format(num_gpus)) -workers = [] -job_id = time.strftime("%Y_%m_%d-%H%M%S") -argslist.append("--group_name=group_{}".format(job_id)) - -print("GPU log directory is {}".format(log_dir)) 
-os.makedirs(log_dir, exist_ok=True) -for i in range(num_gpus): - argslist.append('--rank={}'.format(i)) - stdout = None if i == 0 else open("{}/{}_GPU_{}.log".format(log_dir, job_id, i), - "w") - print(argslist) - p = subprocess.Popen([str(sys.executable)]+argslist, stdout=stdout) - workers.append(p) - argslist = argslist[:-1] - -for p in workers: - p.wait() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation_moe/translation_moe_src/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation_moe/translation_moe_src/__init__.py deleted file mode 100644 index c0abe53e973b4bb31cfb062708965d002c79b6e7..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation_moe/translation_moe_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import translation_moe # noqa diff --git a/spaces/HgMenon/Transcribe_V0.2/src/segments.py b/spaces/HgMenon/Transcribe_V0.2/src/segments.py deleted file mode 100644 index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000 --- a/spaces/HgMenon/Transcribe_V0.2/src/segments.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, List - -import copy - -def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1): - result = [] - - if len(timestamps) == 0: - return result - if max_merge_size is None: - return timestamps - - if padding_left is None: - padding_left = 0 - if padding_right is None: - padding_right = 0 - - processed_time = 0 - current_segment = None - - for i in range(len(timestamps)): - next_segment = timestamps[i] - - delta = next_segment['start'] - processed_time - - # Note that segments can still be longer than the max merge size, they just won't be merged in that case - if current_segment is None or (merge_window is not None and delta > merge_window) \ - or next_segment['end'] - current_segment['start'] > max_merge_size: - # Finish the current segment - if current_segment is not None: - # Add right padding - finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right - current_segment['end'] += finish_padding - delta -= finish_padding - - result.append(current_segment) - - # Start a new segment - current_segment = copy.deepcopy(next_segment) - - # Pad the segment - current_segment['start'] = current_segment['start'] - min(padding_left, delta) - processed_time = current_segment['end'] - - else: - # Merge the segment - current_segment['end'] = next_segment['end'] - processed_time = current_segment['end'] - - # Add the last segment - if current_segment is not None: - current_segment['end'] += padding_right - result.append(current_segment) - - return result \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/run_img_sampling.py b/spaces/Hoodady/3DFuse/run_img_sampling.py deleted file mode 100644 index 8ceafaf17f73cab06077747a6e5c7162ca59a181..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/run_img_sampling.py +++ /dev/null @@ -1,19 +0,0 @@ -from adapt_sd import StableDiffusion - -from my.config import BaseConf - - -class SD(BaseConf): - """Stable Diffusion""" - variant: str = "v1" - v2_highres: bool = False - prompt: str = "a photograph of an astronaut 
riding a horse" - scale: float = 3.0 # classifier free guidance scale - precision: str = 'autocast' - dir: str = './' - alpha: float = 0.0 # merge scale - - def make(self): - args = self.dict() - model = StableDiffusion(**args) - return model \ No newline at end of file diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/losses/segmentation.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/losses/segmentation.py deleted file mode 100644 index 4ba77deb5159a6307ed2acba9945e4764a4ff0a5..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/losses/segmentation.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - - -class BCELoss(nn.Module): - def forward(self, prediction, target): - loss = F.binary_cross_entropy_with_logits(prediction,target) - return loss, {} - - -class BCELossWithQuant(nn.Module): - def __init__(self, codebook_weight=1.): - super().__init__() - self.codebook_weight = codebook_weight - - def forward(self, qloss, target, prediction, split): - bce_loss = F.binary_cross_entropy_with_logits(prediction,target) - loss = bce_loss + self.codebook_weight*qloss - return loss, {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/bce_loss".format(split): bce_loss.detach().mean(), - "{}/quant_loss".format(split): qloss.detach().mean() - } diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/constants.py b/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/constants.py deleted file mode 100644 index 4f159cfe9ac72b0524228fe290181c6898787265..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/constants.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from enum import Enum, EnumMeta -from typing import List - - -class StrEnumMeta(EnumMeta): - # this is workaround for submitit pickling leading to instance checks failing in hydra for StrEnum, see - # https://github.com/facebookresearch/hydra/issues/1156 - @classmethod - def __instancecheck__(cls, other): - return "enum" in str(type(other)) - - -class StrEnum(Enum, metaclass=StrEnumMeta): - def __str__(self): - return self.value - - def __eq__(self, other: str): - return self.value == other - - def __repr__(self): - return self.value - - def __hash__(self): - return hash(str(self)) - - -def ChoiceEnum(choices: List[str]): - """return the Enum class used to enforce list of choices""" - return StrEnum("Choices", {k: k for k in choices}) - - -LOG_FORMAT_CHOICES = ChoiceEnum(["json", "none", "simple", "tqdm"]) -DDP_BACKEND_CHOICES = ChoiceEnum([ - "c10d", # alias for pytorch_ddp - "fully_sharded", # FullyShardedDataParallel from fairscale - "legacy_ddp", - "no_c10d", # alias for legacy_ddp - "pytorch_ddp", - "slow_mo", -]) -DDP_COMM_HOOK_CHOICES = ChoiceEnum(["none", "fp16"]) -DATASET_IMPL_CHOICES = ChoiceEnum(["raw", "lazy", "cached", "mmap", "fasta", "huffman"]) -GENERATION_CONSTRAINTS_CHOICES = ChoiceEnum(["ordered", "unordered"]) -GENERATION_DECODING_FORMAT_CHOICES = ChoiceEnum( - ["unigram", "ensemble", "vote", "dp", "bs"] -) -ZERO_SHARDING_CHOICES = ChoiceEnum(["none", "os"]) -PIPELINE_CHECKPOINT_CHOICES = ChoiceEnum(["always", "never", "except_last"]) -PRINT_ALIGNMENT_CHOICES = ChoiceEnum(["hard", "soft"]) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/speech_generator.py b/spaces/ICML2022/OFA/fairseq/fairseq/speech_generator.py deleted file mode 100644 index 8086e34d2b56fa808d0905b1a00e87e6736fcf04..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/speech_generator.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
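The `StrEnum`/`ChoiceEnum` helpers just above turn a plain list of strings into an enum whose members still behave like strings (they stringify, hash, and compare as their value), which is what lets Hydra-style configs validate choices without breaking pickling. A small sketch of that behaviour, assuming the module is importable as `fairseq.dataclass.constants` as the file path suggests:

```python
# Minimal sketch of the ChoiceEnum helper defined above.
# Assumption: importable as fairseq.dataclass.constants (per the file path).
from fairseq.dataclass.constants import ChoiceEnum

LOG_FORMAT = ChoiceEnum(["json", "none", "simple", "tqdm"])

fmt = LOG_FORMAT.json
print(str(fmt))        # "json"  -- StrEnum.__str__ returns the raw value
print(fmt == "json")   # True    -- __eq__ compares against plain strings
print(sorted(m.value for m in LOG_FORMAT))  # all allowed choices
```

Elsewhere in fairseq these enums are typically used as the declared type of dataclass config fields, so an invalid choice is rejected when the config is parsed.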
- -import torch -import numpy as np - -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig - - -class SpeechGenerator(object): - def __init__(self, model, vocoder, data_cfg: S2TDataConfig): - self.model = model - self.vocoder = vocoder - stats_npz_path = data_cfg.global_cmvn_stats_npz - self.gcmvn_stats = None - if stats_npz_path is not None: - self.gcmvn_stats = np.load(stats_npz_path) - - def gcmvn_denormalize(self, x): - # x: B x T x C - if self.gcmvn_stats is None: - return x - mean = torch.from_numpy(self.gcmvn_stats["mean"]).to(x) - std = torch.from_numpy(self.gcmvn_stats["std"]).to(x) - assert len(x.shape) == 3 and mean.shape[0] == std.shape[0] == x.shape[2] - x = x * std.view(1, 1, -1).expand_as(x) - return x + mean.view(1, 1, -1).expand_as(x) - - def get_waveform(self, feat): - # T x C -> T - return None if self.vocoder is None else self.vocoder(feat).squeeze(0) - - -class AutoRegressiveSpeechGenerator(SpeechGenerator): - def __init__( - self, model, vocoder, data_cfg, max_iter: int = 6000, - eos_prob_threshold: float = 0.5, - ): - super().__init__(model, vocoder, data_cfg) - self.max_iter = max_iter - self.eos_prob_threshold = eos_prob_threshold - - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len = src_tokens.size() - n_frames_per_step = model.decoder.n_frames_per_step - out_dim = model.decoder.out_dim - raw_dim = out_dim // n_frames_per_step - - # initialize - encoder_out = model.forward_encoder(src_tokens, src_lengths, - speaker=sample["speaker"]) - incremental_state = {} - feat, attn, eos_prob = [], [], [] - finished = src_tokens.new_zeros((bsz,)).bool() - out_lens = src_lengths.new_zeros((bsz,)).long().fill_(self.max_iter) - - prev_feat_out = encoder_out["encoder_out"][0].new_zeros(bsz, 1, out_dim) - for step in range(self.max_iter): - cur_out_lens = out_lens.clone() - cur_out_lens.masked_fill_(cur_out_lens.eq(self.max_iter), step + 1) - _, cur_eos_out, cur_extra = model.forward_decoder( - prev_feat_out, encoder_out=encoder_out, - incremental_state=incremental_state, - target_lengths=cur_out_lens, speaker=sample["speaker"], **kwargs - ) - cur_eos_prob = torch.sigmoid(cur_eos_out).squeeze(2) - feat.append(cur_extra['feature_out']) - attn.append(cur_extra['attn']) - eos_prob.append(cur_eos_prob) - - cur_finished = (cur_eos_prob.squeeze(1) > self.eos_prob_threshold) - out_lens.masked_fill_((~finished) & cur_finished, step + 1) - finished = finished | cur_finished - if finished.sum().item() == bsz: - break - prev_feat_out = cur_extra['feature_out'] - - feat = torch.cat(feat, dim=1) - feat = model.decoder.postnet(feat) + feat - eos_prob = torch.cat(eos_prob, dim=1) - attn = torch.cat(attn, dim=2) - alignment = attn.max(dim=1)[1] - - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - out_lens = out_lens * n_frames_per_step - - finalized = [ - { - 'feature': feat[b, :out_len], - 'eos_prob': eos_prob[b, :out_len], - 'attn': attn[b, :, :out_len], - 'alignment': alignment[b, :out_len], - 'waveform': self.get_waveform(feat[b, :out_len]), - } - for b, out_len in zip(range(bsz), out_lens) - ] - - if has_targ: - assert sample["target"].size(-1) == out_dim - tgt_feats = sample["target"].view(bsz, 
-1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class NonAutoregressiveSpeechGenerator(SpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - bsz, max_src_len = sample["net_input"]["src_tokens"].size() - n_frames_per_step = model.encoder.n_frames_per_step - out_dim = model.encoder.out_dim - raw_dim = out_dim // n_frames_per_step - - feat, out_lens, log_dur_out, _, _ = model( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=sample["target_lengths"], - speaker=sample["speaker"] - ) - - feat = feat.view(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - dur_out = torch.clamp( - torch.round(torch.exp(log_dur_out) - 1).long(), min=0 - ) - - def get_dur_plot_data(d): - r = [] - for i, dd in enumerate(d): - r += [i + 1] * dd.item() - return r - - out_lens = out_lens * n_frames_per_step - finalized = [ - { - 'feature': feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]), - 'waveform': self.get_waveform( - feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]) - ), - 'attn': feat.new_tensor(get_dur_plot_data(dur_out[b])), - } - for b, l in zip(range(bsz), out_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class TeacherForcingAutoRegressiveSpeechGenerator(AutoRegressiveSpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - prev_out_tokens = sample["net_input"]["prev_output_tokens"] - tgt_lens = sample["target_lengths"] - n_frames_per_step = model.decoder.n_frames_per_step - raw_dim = model.decoder.out_dim // n_frames_per_step - bsz = src_tokens.shape[0] - - feat, eos_prob, extra = model( - src_tokens, src_lens, prev_out_tokens, incremental_state=None, - target_lengths=tgt_lens, speaker=sample["speaker"] - ) - - attn = extra["attn"] # B x T_s x T_t - alignment = attn.max(dim=1)[1] - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - tgt_lens = sample["target_lengths"] * n_frames_per_step - - finalized = [ - { - 'feature': feat[b, :tgt_len], - 'eos_prob': eos_prob[b, :tgt_len], - 'attn': attn[b, :, :tgt_len], - 'alignment': alignment[b, :tgt_len], - 'waveform': self.get_waveform(feat[b, :tgt_len]), - } - for b, tgt_len in zip(range(bsz), tgt_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized diff 
--git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/clearml/__init__.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/clearml/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Illumotion/Koboldcpp/otherarch/llama_v2-util.h b/spaces/Illumotion/Koboldcpp/otherarch/llama_v2-util.h deleted file mode 100644 index 41b6df386931abeb09693c975c520230d82fd20e..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/llama_v2-util.h +++ /dev/null @@ -1,468 +0,0 @@ -// Internal header to be included only by llama.cpp. -// Contains wrappers around OS interfaces. -#pragma once -#ifndef LLAMA_V2_UTIL_H -#define LLAMA_V2_UTIL_H - -#include "llama-util.h" - -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -#ifdef __has_include - #if __has_include() - #include - #if defined(_POSIX_MAPPED_FILES) - #include - #endif - #if defined(_POSIX_MEMLOCK_RANGE) - #include - #endif - #endif -#endif - -#if defined(_WIN32) - #define WIN32_LEAN_AND_MEAN - #ifndef NOMINMAX - #define NOMINMAX - #endif - #include - #include - #include // for _fseeki64 -#endif - -#define LLAMA_V2_ASSERT(x) \ - do { \ - if (!(x)) { \ - fprintf(stderr, "LLAMA_V2_ASSERT: %s:%d: %s\n", __FILE__, __LINE__, #x); \ - abort(); \ - } \ - } while (0) - -#ifdef __GNUC__ -#ifdef __MINGW32__ -__attribute__((format_old(gnu_printf, 1, 2))) -#else -__attribute__((format_old(printf, 1, 2))) -#endif -#endif - - -struct llama_v2_file { - // use FILE * so we don't have to re-open the file to mmap - FILE * fp; - size_t size; - - llama_v2_file(const char * fname, const char * mode) { - fp = std::fopen(fname, mode); - if (fp == NULL) { - throw std::runtime_error(format_old("failed to open %s: %s", fname, strerror(errno))); - } - seek(0, SEEK_END); - size = tell(); - seek(0, SEEK_SET); - } - - size_t tell() const { -#ifdef _WIN32 - __int64 ret = _ftelli64(fp); -#else - long ret = std::ftell(fp); -#endif - LLAMA_V2_ASSERT(ret != -1); // this really shouldn't fail - return (size_t) ret; - } - - void seek(size_t offset, int whence) { -#ifdef _WIN32 - int ret = _fseeki64(fp, (__int64) offset, whence); -#else - int ret = std::fseek(fp, (long) offset, whence); -#endif - LLAMA_V2_ASSERT(ret == 0); // same - } - - void read_raw(void * ptr, size_t size) { - if (size == 0) { - return; - } - errno = 0; - std::size_t ret = std::fread(ptr, size, 1, fp); - if (ferror(fp)) { - throw std::runtime_error(format_old("read error: %s", strerror(errno))); - } - if (ret != 1) { - throw std::runtime_error(std::string("unexpectedly reached end of file")); - } - } - - std::uint32_t read_u32() { - std::uint32_t ret; - read_raw(&ret, sizeof(ret)); - return ret; - } - - std::string read_string(std::uint32_t len) { - std::vector chars(len); - read_raw(chars.data(), len); - return std::string(chars.data(), len); - } - - void write_raw(const void * ptr, size_t size) { - if (size == 0) { - return; - } - errno = 0; - size_t ret = std::fwrite(ptr, size, 1, fp); - if (ret != 1) { - throw std::runtime_error(format_old("write error: %s", strerror(errno))); - } - } - - void write_u32(std::uint32_t val) { - write_raw(&val, sizeof(val)); - } - - ~llama_v2_file() { - if (fp) { - std::fclose(fp); - } - } -}; - -#if defined(_WIN32) -static std::string llama_v2_format_win_err(DWORD err) { - LPSTR buf; - size_t size = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | 
FORMAT_MESSAGE_IGNORE_INSERTS, - NULL, err, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPSTR)&buf, 0, NULL); - if (!size) { - return "FormatMessageA failed"; - } - std::string ret(buf, size); - LocalFree(buf); - return ret; -} -#endif - -struct llama_v2_mmap { - void * addr; - size_t size; - - llama_v2_mmap(const llama_v2_mmap &) = delete; - -#ifdef _POSIX_MAPPED_FILES - static constexpr bool SUPPORTED = true; - - llama_v2_mmap(struct llama_v2_file * file, bool prefetch = true) { - size = file->size; - int fd = fileno(file->fp); - int flags = MAP_SHARED; -#ifdef __linux__ - flags |= MAP_POPULATE; -#endif - addr = mmap(NULL, file->size, PROT_READ, flags, fd, 0); - if (addr == MAP_FAILED) { - throw std::runtime_error(format_old("mmap failed: %s", strerror(errno))); - } - - if (prefetch) { - // Advise the kernel to preload the mapped memory - if (madvise(addr, file->size, MADV_WILLNEED)) { - fprintf(stderr, "warning: madvise(.., MADV_WILLNEED) failed: %s\n", - strerror(errno)); - } - } - } - - ~llama_v2_mmap() { - munmap(addr, size); - } -#elif defined(_WIN32) - static constexpr bool SUPPORTED = true; - - llama_v2_mmap(struct llama_v2_file * file, bool prefetch = true) { - size = file->size; - - HANDLE hFile = (HANDLE) _get_osfhandle(_fileno(file->fp)); - - HANDLE hMapping = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL); - DWORD error = GetLastError(); - - if (hMapping == NULL) { - throw std::runtime_error(format_old("CreateFileMappingA failed: %s", llama_v2_format_win_err(error).c_str())); - } - - addr = MapViewOfFile(hMapping, FILE_MAP_READ, 0, 0, 0); - error = GetLastError(); - CloseHandle(hMapping); - - if (addr == NULL) { - throw std::runtime_error(format_old("MapViewOfFile failed: %s", llama_v2_format_win_err(error).c_str())); - } - - #ifndef USE_FAILSAFE - #if _WIN32_WINNT >= _WIN32_WINNT_WIN8 - if (prefetch) { - // Advise the kernel to preload the mapped memory - WIN32_MEMORY_RANGE_ENTRY range; - range.VirtualAddress = addr; - range.NumberOfBytes = (SIZE_T)size; - if (!PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0)) { - fprintf(stderr, "warning: PrefetchVirtualMemory failed: %s\n", - llama_v2_format_win_err(GetLastError()).c_str()); - } - } - #else - #pragma message("warning: You are building for pre-Windows 8; prefetch not supported") - #endif // _WIN32_WINNT >= _WIN32_WINNT_WIN8 - #else - printf("\nPrefetchVirtualMemory skipped in compatibility mode.\n"); - #endif - } - - ~llama_v2_mmap() { - if (!UnmapViewOfFile(addr)) { - fprintf(stderr, "warning: UnmapViewOfFile failed: %s\n", - llama_v2_format_win_err(GetLastError()).c_str()); - } - } -#else - static constexpr bool SUPPORTED = false; - - llama_v2_mmap(struct llama_v2_file *, bool prefetch = true) { - (void)prefetch; - throw std::runtime_error(std::string("mmap not supported")); - } -#endif -}; - -// Represents some region of memory being locked using mlock or VirtualLock; -// will automatically unlock on destruction. 
-struct llama_v2_mlock { - void * addr = NULL; - size_t size = 0; - bool failed_already = false; - - llama_v2_mlock() {} - llama_v2_mlock(const llama_v2_mlock &) = delete; - - ~llama_v2_mlock() { - if (size) { - raw_unlock(addr, size); - } - } - - void init(void * addr) { - LLAMA_V2_ASSERT(this->addr == NULL && this->size == 0); - this->addr = addr; - } - - void grow_to(size_t target_size) { - LLAMA_V2_ASSERT(addr); - if (failed_already) { - return; - } - size_t granularity = lock_granularity(); - target_size = (target_size + granularity - 1) & ~(granularity - 1); - if (target_size > size) { - if (raw_lock((uint8_t *) addr + size, target_size - size)) { - size = target_size; - } else { - failed_already = true; - } - } - } - -#ifdef _POSIX_MEMLOCK_RANGE - static constexpr bool SUPPORTED = true; - - size_t lock_granularity() { - return (size_t) sysconf(_SC_PAGESIZE); - } - - #ifdef __APPLE__ - #define MLOCK_SUGGESTION \ - "Try increasing the sysctl values 'vm.user_wire_limit' and 'vm.global_user_wire_limit' and/or " \ - "decreasing 'vm.global_no_user_wire_amount'. Also try increasing RLIMIT_MLOCK (ulimit -l).\n" - #else - #define MLOCK_SUGGESTION \ - "Try increasing RLIMIT_MLOCK ('ulimit -l' as root).\n" - #endif - - bool raw_lock(const void * addr, size_t size) { - if (!mlock(addr, size)) { - return true; - } else { - char* errmsg = std::strerror(errno); - bool suggest = (errno == ENOMEM); - - // Check if the resource limit is fine after all - struct rlimit lock_limit; - if (suggest && getrlimit(RLIMIT_MEMLOCK, &lock_limit)) - suggest = false; - if (suggest && (lock_limit.rlim_max > lock_limit.rlim_cur + size)) - suggest = false; - - fprintf(stderr, "warning: failed to mlock %zu-byte buffer (after previously locking %zu bytes): %s\n%s", - size, this->size, errmsg, suggest ? MLOCK_SUGGESTION : ""); - return false; - } - } - - #undef MLOCK_SUGGESTION - - void raw_unlock(void * addr, size_t size) { - if (munlock(addr, size)) { - fprintf(stderr, "warning: failed to munlock buffer: %s\n", std::strerror(errno)); - } - } -#elif defined(_WIN32) - static constexpr bool SUPPORTED = true; - - size_t lock_granularity() { - SYSTEM_INFO si; - GetSystemInfo(&si); - return (size_t) si.dwPageSize; - } - - bool raw_lock(void * addr, size_t size) { - for (int tries = 1; ; tries++) { - if (VirtualLock(addr, size)) { - return true; - } - if (tries == 2) { - fprintf(stderr, "warning: failed to VirtualLock %zu-byte buffer (after previously locking %zu bytes): %s\n", - size, this->size, llama_v2_format_win_err(GetLastError()).c_str()); - return false; - } - - // It failed but this was only the first try; increase the working - // set size and try again. - SIZE_T min_ws_size, max_ws_size; - if (!GetProcessWorkingSetSize(GetCurrentProcess(), &min_ws_size, &max_ws_size)) { - fprintf(stderr, "warning: GetProcessWorkingSetSize failed: %s\n", - llama_v2_format_win_err(GetLastError()).c_str()); - return false; - } - // Per MSDN: "The maximum number of pages that a process can lock - // is equal to the number of pages in its minimum working set minus - // a small overhead." 
- // Hopefully a megabyte is enough overhead: - size_t increment = size + 1048576; - // The minimum must be <= the maximum, so we need to increase both: - min_ws_size += increment; - max_ws_size += increment; - if (!SetProcessWorkingSetSize(GetCurrentProcess(), min_ws_size, max_ws_size)) { - fprintf(stderr, "warning: SetProcessWorkingSetSize failed: %s\n", - llama_v2_format_win_err(GetLastError()).c_str()); - return false; - } - } - } - - void raw_unlock(void * addr, size_t size) { - if (!VirtualUnlock(addr, size)) { - fprintf(stderr, "warning: failed to VirtualUnlock buffer: %s\n", - llama_v2_format_win_err(GetLastError()).c_str()); - } - } -#else - static constexpr bool SUPPORTED = false; - - size_t lock_granularity() { - return (size_t) 65536; - } - - bool raw_lock(const void * addr, size_t size) { - fprintf(stderr, "warning: mlock not supported on this system\n"); - return false; - } - - void raw_unlock(const void * addr, size_t size) {} -#endif -}; - -// Replacement for std::vector that doesn't require zero-initialization. -struct llama_v2_buffer { - uint8_t * addr = NULL; - size_t size = 0; - - llama_v2_buffer() = default; - - void resize(size_t size) { - delete[] addr; - addr = new uint8_t[size]; - this->size = size; - } - - ~llama_v2_buffer() { - delete[] addr; - } - - // disable copy and move - llama_v2_buffer(const llama_v2_buffer&) = delete; - llama_v2_buffer(llama_v2_buffer&&) = delete; - llama_v2_buffer& operator=(const llama_v2_buffer&) = delete; - llama_v2_buffer& operator=(llama_v2_buffer&&) = delete; -}; - -#ifdef GGML_USE_CUBLAS -#include "ggml_v2-cuda.h" -struct llama_v2_ctx_buffer { - uint8_t * addr = NULL; - bool is_cuda; - size_t size = 0; - - llama_v2_ctx_buffer() = default; - - void resize(size_t size) { - free(); - - addr = (uint8_t *) ggml_v2_cuda_host_malloc(size); - if (addr) { - is_cuda = true; - } - else { - // fall back to pageable memory - addr = new uint8_t[size]; - is_cuda = false; - } - this->size = size; - } - - void free() { - if (addr) { - if (is_cuda) { - ggml_v2_cuda_host_free(addr); - } - else { - delete[] addr; - } - } - addr = NULL; - } - - ~llama_v2_ctx_buffer() { - free(); - } - - // disable copy and move - llama_v2_ctx_buffer(const llama_v2_ctx_buffer&) = delete; - llama_v2_ctx_buffer(llama_v2_ctx_buffer&&) = delete; - llama_v2_ctx_buffer& operator=(const llama_v2_ctx_buffer&) = delete; - llama_v2_ctx_buffer& operator=(llama_v2_ctx_buffer&&) = delete; -}; -#else -typedef llama_v2_buffer llama_v2_ctx_buffer; -#endif - -#endif diff --git a/spaces/Izal887/rvc-ram12/lib/infer_pack/transforms.py b/spaces/Izal887/rvc-ram12/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/Izal887/rvc-ram12/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = 
spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights 
= cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/JUNGU/face-swap/app.py b/spaces/JUNGU/face-swap/app.py deleted file mode 100644 index 44fe35665ef46bf9ad7030d6a59f5a29dd64a80d..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/face-swap/app.py +++ /dev/null @@ -1,217 +0,0 @@ -import gradio -from huggingface_hub import Repository -import os - -from utils.utils import norm_crop, estimate_norm, inverse_estimate_norm, transform_landmark_points, get_lm -from networks.layers import AdaIN, AdaptiveAttention -from tensorflow_addons.layers import InstanceNormalization -import numpy as np -import cv2 -from scipy.ndimage import gaussian_filter - -from tensorflow.keras.models import load_model -from options.swap_options import SwapOptions - - - -token = os.environ['model_fetch'] - -opt = SwapOptions().parse() - -retina_repo = Repository(local_dir="retina_model", clone_from="felixrosberg/retinaface_resnet50", - private=True, use_auth_token=token, git_user="felixrosberg") - -from retina_model.models import * - -RetinaFace = load_model("retina_model/retinaface_res50.h5", - custom_objects={"FPN": FPN, - "SSH": SSH, - "BboxHead": BboxHead, - "LandmarkHead": LandmarkHead, - "ClassHead": ClassHead}) - -arc_repo = 
Repository(local_dir="arcface_model", clone_from="felixrosberg/arcface_tf", - private=True, use_auth_token=token) -ArcFace = load_model("arcface_model/arc_res50.h5") -ArcFaceE = load_model("arcface_model/arc_res50e.h5") - -g_repo = Repository(local_dir="g_model_c_hq", clone_from="felixrosberg/affa_config_c_hq", - private=True, use_auth_token=token) -G = load_model("g_model_c_hq/generator_t_28.h5", custom_objects={"AdaIN": AdaIN, - "AdaptiveAttention": AdaptiveAttention, - "InstanceNormalization": InstanceNormalization}) - -r_repo = Repository(local_dir="reconstruction_attack", clone_from="felixrosberg/reconstruction_attack", - private=True, use_auth_token=token) -R = load_model("reconstruction_attack/reconstructor_42.h5", custom_objects={"AdaIN": AdaIN, - "AdaptiveAttention": AdaptiveAttention, - "InstanceNormalization": InstanceNormalization}) - -permuter_repo = Repository(local_dir="identity_permuter", clone_from="felixrosberg/identitypermuter", - private=True, use_auth_token=token, git_user="felixrosberg") - -from identity_permuter.id_permuter import identity_permuter - -IDP = identity_permuter(emb_size=32, min_arg=False) -IDP.load_weights("identity_permuter/id_permuter.h5") - -blend_mask_base = np.zeros(shape=(256, 256, 1)) -blend_mask_base[80:244, 32:224] = 1 -blend_mask_base = gaussian_filter(blend_mask_base, sigma=7) - - -def run_inference(target, source, slider, adv_slider, settings): - try: - source = np.array(source) - target = np.array(target) - - # Prepare to load video - if "anonymize" not in settings: - source_a = RetinaFace(np.expand_dims(source, axis=0)).numpy()[0] - source_h, source_w, _ = source.shape - source_lm = get_lm(source_a, source_w, source_h) - source_aligned = norm_crop(source, source_lm, image_size=256) - source_z = ArcFace.predict(np.expand_dims(tf.image.resize(source_aligned, [112, 112]) / 255.0, axis=0)) - else: - source_z = None - - # read frame - im = target - im_h, im_w, _ = im.shape - im_shape = (im_w, im_h) - - detection_scale = im_w // 640 if im_w > 640 else 1 - - faces = RetinaFace(np.expand_dims(cv2.resize(im, - (im_w // detection_scale, - im_h // detection_scale)), axis=0)).numpy() - - total_img = im / 255.0 - for annotation in faces: - lm_align = np.array([[annotation[4] * im_w, annotation[5] * im_h], - [annotation[6] * im_w, annotation[7] * im_h], - [annotation[8] * im_w, annotation[9] * im_h], - [annotation[10] * im_w, annotation[11] * im_h], - [annotation[12] * im_w, annotation[13] * im_h]], - dtype=np.float32) - - # align the detected face - M, pose_index = estimate_norm(lm_align, 256, "arcface", shrink_factor=1.0) - im_aligned = (cv2.warpAffine(im, M, (256, 256), borderValue=0.0) - 127.5) / 127.5 - - if "adversarial defense" in settings: - eps = adv_slider / 200 - X = tf.convert_to_tensor(np.expand_dims(im_aligned, axis=0)) - with tf.GradientTape() as tape: - tape.watch(X) - - X_z = ArcFaceE(tf.image.resize(X * 0.5 + 0.5, [112, 112])) - output = R([X, X_z]) - - loss = tf.reduce_mean(tf.abs(0 - output)) - - gradient = tf.sign(tape.gradient(loss, X)) - - adv_x = X + eps * gradient - im_aligned = tf.clip_by_value(adv_x, -1, 1)[0] - - if "anonymize" in settings and "reconstruction attack" not in settings: - """source_z = ArcFace.predict(np.expand_dims(tf.image.resize(im_aligned, [112, 112]) / 255.0, axis=0)) - anon_ratio = int(512 * (slider / 100)) - anon_vector = np.ones(shape=(1, 512)) - anon_vector[:, :anon_ratio] = -1 - np.random.shuffle(anon_vector) - source_z *= anon_vector""" - - slider_weight = slider / 100 - - target_z = 
ArcFace.predict(np.expand_dims(tf.image.resize(im_aligned, [112, 112]) * 0.5 + 0.5, axis=0)) - source_z = IDP.predict(target_z) - - source_z = slider_weight * source_z + (1 - slider_weight) * target_z - - if "reconstruction attack" in settings: - source_z = ArcFaceE.predict(np.expand_dims(tf.image.resize(im_aligned, [112, 112]) * 0.5 + 0.5, axis=0)) - - # face swap - if "reconstruction attack" not in settings: - changed_face_cage = G.predict([np.expand_dims(im_aligned, axis=0), - source_z]) - changed_face = changed_face_cage[0] * 0.5 + 0.5 - - # get inverse transformation landmarks - transformed_lmk = transform_landmark_points(M, lm_align) - - # warp image back - iM, _ = inverse_estimate_norm(lm_align, transformed_lmk, 256, "arcface", shrink_factor=1.0) - iim_aligned = cv2.warpAffine(changed_face, iM, im_shape, borderValue=0.0) - - # blend swapped face with target image - blend_mask = cv2.warpAffine(blend_mask_base, iM, im_shape, borderValue=0.0) - blend_mask = np.expand_dims(blend_mask, axis=-1) - total_img = (iim_aligned * blend_mask + total_img * (1 - blend_mask)) - else: - changed_face_cage = R.predict([np.expand_dims(im_aligned, axis=0), - source_z]) - changed_face = changed_face_cage[0] * 0.5 + 0.5 - - # get inverse transformation landmarks - transformed_lmk = transform_landmark_points(M, lm_align) - - # warp image back - iM, _ = inverse_estimate_norm(lm_align, transformed_lmk, 256, "arcface", shrink_factor=1.0) - iim_aligned = cv2.warpAffine(changed_face, iM, im_shape, borderValue=0.0) - - # blend swapped face with target image - blend_mask = cv2.warpAffine(blend_mask_base, iM, im_shape, borderValue=0.0) - blend_mask = np.expand_dims(blend_mask, axis=-1) - total_img = (iim_aligned * blend_mask + total_img * (1 - blend_mask)) - - if "compare" in settings: - total_img = np.concatenate((im / 255.0, total_img), axis=1) - - total_img = np.clip(total_img, 0, 1) - total_img *= 255.0 - total_img = total_img.astype('uint8') - - return total_img - except Exception as e: - print(e) - return None - - -description = "Performs subject agnostic identity transfer from a source face to all target faces. \n\n" \ - "Implementation and demo of FaceDancer, accepted to WACV 2023. \n\n" \ - "Pre-print: https://arxiv.org/abs/2210.10473 \n\n" \ - "Code: https://github.com/felixrosberg/FaceDancer \n\n" \ - "\n\n" \ - "Options:\n\n" \ - "-Compare returns the target image concatenated with the results.\n\n" \ - "-Anonymize will ignore the source image and perform an identity permutation of target faces.\n\n" \ - "-Reconstruction attack will attempt to invert the face swap or the anonymization.\n\n" \ - "-Adversarial defense will add a permutation noise that disrupts the reconstruction attack.\n\n" \ - "NOTE: There is no guarantees with the anonymization process currently.\n\n" \ - "NOTE: source image with too high resolution may not work properly!" -examples = [["assets/rick.jpg", "assets/musk.jpg", 100, 10, ["compare"]], - ["assets/musk.jpg", "assets/musk.jpg", 100, 10, ["anonymize"]]] -article = """ -Demo is based of recent research from my Ph.D work. Results expects to be published in the coming months. 
-""" - -iface = gradio.Interface(run_inference, - [gradio.inputs.Image(shape=None, label='Target'), - gradio.inputs.Image(shape=None, label='Source'), - gradio.inputs.Slider(0, 100, default=100, label="Anonymization ratio (%)"), - gradio.inputs.Slider(0, 100, default=100, label="Adversarial defense ratio (%)"), - gradio.inputs.CheckboxGroup(["compare", - "anonymize", - "reconstruction attack", - "adversarial defense"], - label='Options')], - gradio.outputs.Image(), - title="Face Swap", - description=description, - examples=examples, - article=article, - layout="vertical") -iface.launch() diff --git a/spaces/JianYu233/bingo1/README.md b/spaces/JianYu233/bingo1/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/JianYu233/bingo1/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo -emoji: 😊 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
- -# Bingo - -Bingo, a New Bing that lets you breathe easy. - -A faithful recreation of the main features of the New Bing web UI; usable inside mainland China, compatible with most Microsoft Bing AI functionality, and deployable on your own. - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Github issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -Please report issues at https://github.com/weaigc/bingo/issues -
- - diff --git a/spaces/JimmyTarbender/GPT2HistoryEvents/app.py b/spaces/JimmyTarbender/GPT2HistoryEvents/app.py deleted file mode 100644 index afa097b1449cc72a77c3c424c2a7969e5780439c..0000000000000000000000000000000000000000 --- a/spaces/JimmyTarbender/GPT2HistoryEvents/app.py +++ /dev/null @@ -1,275 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -import os -import torch -import torch.nn as nn -from transformers.activations import get_activation -from transformers import AutoTokenizer, AutoModelForCausalLM - - -st.title('GPT2: To see all prompt outlines: https://huggingface.co/BigSalmon/BigSalmon/InformalToFormalLincoln91Paraphrase') - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -@st.cache(allow_output_mutation=True) -def get_model(): - tokenizer = AutoTokenizer.from_pretrained("BigSalmon/HistoryCurrentEvents") - model = AutoModelForCausalLM.from_pretrained("BigSalmon/HistoryCurrentEvents") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnMediumParaphraseConcise") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincolnMediumParaphraseConcise") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln91Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln91Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln86Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln86Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln82Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln82Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln79Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln79Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln74Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln74Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln72Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln72Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln55") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln55") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln51") - #model = 
AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln51") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln45") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln49") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln43") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln43") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln41") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln41") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln38") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln38") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln37") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln37") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln36") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln36") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln31") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln31") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln21") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent") - - #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence") - #model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence") - - return model, tokenizer - -model, tokenizer = get_model() - -g = """informal english: garage band has made people who know nothing about music good at creating music. -Translated into the Style of Abraham Lincoln: garage band ( offers the uninitiated in music the ability to produce professional-quality compositions / catapults those for whom music is an uncharted art the ability the realize masterpieces / stimulates music novice's competency to yield sublime arrangements / begets individuals of rudimentary musical talent the proficiency to fashion elaborate suites ). -informal english: chrome extensions can make doing regular tasks much easier to get done. -Translated into the Style of Abraham Lincoln: chrome extensions ( yield the boon of time-saving convenience / ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks / turbocharges the velocity with which one can conduct their obligations ). -informal english: broadband is finally expanding to rural areas, a great development that will thrust them into modern life. -Translated into the Style of Abraham Lincoln: broadband is ( ( finally / at last / after years of delay ) arriving in remote locations / springing to life in far-flung outposts / inching into even the most backwater corners of the nation ) that will leap-frog them into the twenty-first century. 
-informal english: google translate has made talking to people who do not share your language easier. -Translated into the Style of Abraham Lincoln: google translate ( imparts communicability to individuals whose native tongue differs / mitigates the trials of communication across linguistic barriers / hastens the bridging of semantic boundaries / mollifies the complexity of multilingual communication / avails itself to the internationalization of discussion / flexes its muscles to abet intercultural conversation / calms the tides of linguistic divergence ). -informal english: corn fields are all across illinois, visible once you leave chicago. -Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. -informal english: """ - -number_of_outputs = st.sidebar.slider("Number of Outputs", 5, 100) -log_nums = st.sidebar.slider("How Many Log Outputs?", 50, 600) - -def BestProbs(prompt): - prompt = prompt.strip() - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities = torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(10) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - for i in best_words[0:10]: - print("_______") - st.write(f"${i} $\n") - f = (f"${i} $\n") - m = (prompt + f"{i}") - BestProbs2(m) - return f - -def BestProbs2(prompt): - prompt = prompt.strip() - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities = torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(20) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - for i in best_words[0:20]: - print(i) - st.write(i) - -def LogProbs(prompt): - col1 = [] - col2 = [] - prompt = prompt.strip() - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities = torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(10) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - for i in best_words[0:10]: - print("_______") - f = i - col1.append(f) - m = (prompt + f"{i}") - #print("^^" + f + " ^^") - prompt = m.strip() - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities = torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(20) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - for i in best_words[0:20]: - #print(i) - col2.append(i) - #print(col1) - #print(col2) - d = {col1[0]: [col2[0], col2[1], col2[2], col2[3], col2[4], col2[5], col2[6], col2[7], col2[8], col2[9], col2[10], col2[11], col2[12], col2[13], col2[14], col2[15], col2[16], col2[17], col2[18], col2[19]], - col1[1]: [col2[20], 
col2[21], col2[22], col2[23], col2[24], col2[25], col2[26], col2[27], col2[28], col2[29], col2[30], col2[31], col2[32], col2[33], col2[34], col2[35], col2[36], col2[37], col2[38], col2[39]], - col1[2]: [col2[40], col2[41], col2[42], col2[43], col2[44], col2[45], col2[46], col2[47], col2[48], col2[49], col2[50], col2[51], col2[52], col2[53], col2[54], col2[55], col2[56], col2[57], col2[58], col2[59]], - col1[3]: [col2[60], col2[61], col2[62], col2[63], col2[64], col2[65], col2[66], col2[67], col2[68], col2[69], col2[70], col2[71], col2[72], col2[73], col2[74], col2[75], col2[76], col2[77], col2[78], col2[79]], - col1[4]: [col2[80], col2[81], col2[82], col2[83], col2[84], col2[85], col2[86], col2[87], col2[88], col2[89], col2[90], col2[91], col2[92], col2[93], col2[94], col2[95], col2[96], col2[97], col2[98], col2[99]], - col1[5]: [col2[100], col2[101], col2[102], col2[103], col2[104], col2[105], col2[106], col2[107], col2[108], col2[109], col2[110], col2[111], col2[112], col2[113], col2[114], col2[115], col2[116], col2[117], col2[118], col2[119]], - col1[6]: [col2[120], col2[121], col2[122], col2[123], col2[124], col2[125], col2[126], col2[127], col2[128], col2[129], col2[130], col2[131], col2[132], col2[133], col2[134], col2[135], col2[136], col2[137], col2[138], col2[139]], - col1[7]: [col2[140], col2[141], col2[142], col2[143], col2[144], col2[145], col2[146], col2[147], col2[148], col2[149], col2[150], col2[151], col2[152], col2[153], col2[154], col2[155], col2[156], col2[157], col2[158], col2[159]], - col1[8]: [col2[160], col2[161], col2[162], col2[163], col2[164], col2[165], col2[166], col2[167], col2[168], col2[169], col2[170], col2[171], col2[172], col2[173], col2[174], col2[175], col2[176], col2[177], col2[178], col2[179]], - col1[9]: [col2[180], col2[181], col2[182], col2[183], col2[184], col2[185], col2[186], col2[187], col2[188], col2[189], col2[190], col2[191], col2[192], col2[193], col2[194], col2[195], col2[196], col2[197], col2[198], col2[199]]} - df = pd.DataFrame(data=d) - print(df) - st.write(df) - return df - -def BestProbs5(prompt): - prompt = prompt.strip() - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities = torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(number_of_outputs) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - for i in best_words[0:number_of_outputs]: - #print(i) - print("\n") - g = (prompt + i) - st.write(g) - l = run_generate(g, "hey") - st.write(l) - -def run_generate(text, bad_words): - yo = [] - input_ids = tokenizer.encode(text, return_tensors='pt') - res = len(tokenizer.encode(text)) - bad_words = bad_words.split() - bad_word_ids = [[7829], [40940]] - for bad_word in bad_words: - bad_word = " " + bad_word - ids = tokenizer(bad_word).input_ids - bad_word_ids.append(ids) - sample_outputs = model.generate( - input_ids, - do_sample=True, - max_length= res + 5, - min_length = res + 5, - top_k=50, - temperature=1.0, - num_return_sequences=3, - bad_words_ids=bad_word_ids - ) - for i in range(3): - e = tokenizer.decode(sample_outputs[i]) - e = e.replace(text, "") - yo.append(e) - print(yo) - return yo - -with st.form(key='my_form'): - prompt = st.text_area(label='Enter sentence', value=g, height=500) - submit_button = st.form_submit_button(label='Submit') - submit_button2 = st.form_submit_button(label='Fast 
Forward') - submit_button3 = st.form_submit_button(label='Fast Forward 2.0') - submit_button4 = st.form_submit_button(label='Get Top') - - if submit_button: - with torch.no_grad(): - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - myinput= myinput.to(device) - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities = torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(log_nums) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - text.append(best_indices[0].item()) - best_probabilities = probabilities[best_indices].tolist() - words = [] - st.write(best_words) - if submit_button2: - print("----") - st.write("___") - m = LogProbs(prompt) - st.write("___") - st.write(m) - st.write("___") - if submit_button3: - print("----") - st.write("___") - st.write(BestProbs) - if submit_button4: - BestProbs5(prompt) \ No newline at end of file diff --git a/spaces/Jose-Alonso26/API-Online/main.py b/spaces/Jose-Alonso26/API-Online/main.py deleted file mode 100644 index 934cb41afead0e2a16aa19c29a7d3cfaa3b564e5..0000000000000000000000000000000000000000 --- a/spaces/Jose-Alonso26/API-Online/main.py +++ /dev/null @@ -1,109 +0,0 @@ -from fastapi import FastAPI, HTTPException -from pydantic import BaseModel -import sqlite3 # Agregar la importación de sqlite3 - -app = FastAPI() - -# Modelo para la creación de un nuevo contacto -class ContactoCreate(BaseModel): - nombre: str - primer_apellido: str - segundo_apellido: str - email: str - telefono: str - -# Conéctate a la base de datos SQLite y crea una tabla para almacenar los contactos si no existe -conn = sqlite3.connect("contactos.db") -cursor = conn.cursor() -cursor.execute(''' - CREATE TABLE IF NOT EXISTS contactos ( - id_contacto INTEGER PRIMARY KEY, - nombre TEXT, - primer_apellido TEXT, - segundo_apellido TEXT, - email TEXT, - telefono TEXT - ) -''') -conn.commit() - -# Función para obtener todos los contactos -def obtener_contactos(): - cursor.execute("SELECT * FROM contactos") - contactos = cursor.fetchall() - return [dict(zip(["id_contacto", "nombre", "primer_apellido", "segundo_apellido", "email", "telefono"], c)) for c in contactos] - -# Resto del código - -# Endpoint para agregar un nuevo contacto -@app.post("/contactos", description="Agregar un nuevo contacto", response_model=dict) -async def agregar_contacto(contacto: ContactoCreate): - try: - cursor.execute(''' - INSERT INTO contactos (nombre, primer_apellido, segundo_apellido, email, telefono) - VALUES (?, ?, ?, ?, ?) 
- ''', (contacto.nombre, contacto.primer_apellido, contacto.segundo_apellido, contacto.email, contacto.telefono)) - conn.commit() - return contacto.dict() - except Exception as e: - raise HTTPException(status_code=500, detail="Error al agregar el contacto") - -# Endpoint para obtener todos los contactos -@app.get("/contactos", description="Obtener todos los contactos", response_model=list[dict]) -async def get_contactos(): - try: - return obtener_contactos() - except Exception as e: - raise HTTPException(status_code=500, detail="Error al obtener los contactos") - -# Endpoint para actualizar un contacto por id_contacto -@app.put("/contactos/{contacto_id}", description="Actualizar un contacto por su ID", response_model=dict) -@app.patch("/contactos/{contacto_id}", description="Actualizar un contacto por su ID", response_model=dict) -async def actualizar_contacto(contacto_id: int, contacto: ContactoCreate): - try: - cursor.execute(''' - UPDATE contactos - SET nombre = ?, primer_apellido = ?, segundo_apellido = ?, email = ?, telefono = ? - WHERE id_contacto = ? - ''', (contacto.nombre, contacto.primer_apellido, contacto.segundo_apellido, contacto.email, contacto.telefono, contacto_id)) - conn.commit() - - if cursor.rowcount == 0: - raise HTTPException(status_code=404, detail="Contacto no encontrado") - - return { "id_contacto": contacto_id, **contacto.dict() } - - except Exception as e: - raise HTTPException(status_code=500, detail="Error al actualizar el contacto") - -# Endpoint para borrar un contacto por id_contacto -@app.delete("/contactos/{contacto_id}", description="Borrar un contacto por su ID", response_model=dict) -async def borrar_contacto(contacto_id: int): - try: - cursor.execute("SELECT * FROM contactos WHERE id_contacto = ?", (contacto_id,)) - contacto = cursor.fetchone() - - if not contacto: - raise HTTPException(status_code=404, detail="Contacto no encontrado") - - cursor.execute("DELETE FROM contactos WHERE id_contacto = ?", (contacto_id,)) - conn.commit() - - return { "id_contacto": contacto_id, **dict(zip(["nombre", "primer_apellido", "segundo_apellido", "email", "telefono"], contacto)) } - - except Exception as e: - raise HTTPException(status_code=500, detail="Error al borrar el contacto") - -# Endpoint para buscar contactos por nombre -@app.get("/contactos/buscar", description="Buscar contactos por nombre", response_model=list[dict]) -async def buscar_contactos_por_nombre(nombre: str): - try: - cursor.execute('SELECT * FROM contactos WHERE nombre LIKE ?', ('%' + nombre + '%',)) - contactos = cursor.fetchall() - return [dict(zip(["id_contacto", "nombre", "primer_apellido", "segundo_apellido", "email", "telefono"], c)) for c in contactos] - except Exception as e: - raise HTTPException(status_code=500, detail="Error al buscar contactos por nombre") - -if __name__ == "__main__": - import uvicorn - uvicorn.run(app, host="localhost", port=8000) diff --git a/spaces/KAHRAMAN42/youtube_transcript/app.py b/spaces/KAHRAMAN42/youtube_transcript/app.py deleted file mode 100644 index b2af955f26e643804ca5e7f2e6c7a588e60e732d..0000000000000000000000000000000000000000 --- a/spaces/KAHRAMAN42/youtube_transcript/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import gradio as gr -from transformers import pipeline -from youtube_transcript_api import YouTubeTranscriptApi - - -def youtube(url): - video_id=url.split("=")[1] - transcript=YouTubeTranscriptApi.get_transcript(video_id) - result="" - for i in transcript[0:100]: - result+=i["text"]+" "+"\n" - 
summarization=pipeline("summarization",model="facebook/bart-large-cnn",max_length=60,min_length=10) - list_summarize=summarization(result)[0]["summary_text"].split(". ") - result= ". ".join(list_summarize) - return result - -iface = gr.Interface( - fn=youtube, - inputs="text", - outputs="text", - layout="vertical", - title="Video Özeti Oluşturucu", - description="Bir YouTube videosunun URL'sini girin ve özetini alın." -) - -iface.launch() \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/onnx_inference.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - 
self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/KazeDevID/RVC-Model/infer_pack/models_onnx_moess.py b/spaces/KazeDevID/RVC-Model/infer_pack/models_onnx_moess.py deleted file mode 100644 index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000 --- a/spaces/KazeDevID/RVC-Model/infer_pack/models_onnx_moess.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + 
self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = 
self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - 
with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = 
torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - 
filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class 
MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Kichkinya/reverseproxynya/README.md b/spaces/Kichkinya/reverseproxynya/README.md deleted file mode 100644 index c87dec7bbd4c35a516c62f8903b23d02014b3a7a..0000000000000000000000000000000000000000 --- a/spaces/Kichkinya/reverseproxynya/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Reverseproxynya -emoji: 💻 -colorFrom: green -colorTo: purple -sdk: docker -pinned: 
false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kororinpa/Amadeus_Project/mel_processing.py b/spaces/Kororinpa/Amadeus_Project/mel_processing.py deleted file mode 100644 index 817f03756f64caf8cc54329a9325024c8fb9e0c3..0000000000000000000000000000000000000000 --- a/spaces/Kororinpa/Amadeus_Project/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - 
hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ld_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ld_head.py deleted file mode 100644 index b5679179c79c8d4e6606e63bd745729c841bef19..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ld_head.py +++ /dev/null @@ -1,257 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Tuple - -import torch -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures import SampleList -from mmdet.structures.bbox import bbox_overlaps -from mmdet.utils import ConfigType, InstanceList, OptInstanceList, reduce_mean -from ..utils import multi_apply, unpack_gt_instances -from .gfl_head import GFLHead - - -@MODELS.register_module() -class LDHead(GFLHead): - """Localization distillation Head. (Short description) - - It utilizes the learned bbox distributions to transfer the localization - dark knowledge from teacher to student. Original paper: `Localization - Distillation for Object Detection. `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - loss_ld (:obj:`ConfigDict` or dict): Config of Localization - Distillation Loss (LD), T is the temperature for distillation. - """ - - def __init__(self, - num_classes: int, - in_channels: int, - loss_ld: ConfigType = dict( - type='LocalizationDistillationLoss', - loss_weight=0.25, - T=10), - **kwargs) -> dict: - - super().__init__( - num_classes=num_classes, in_channels=in_channels, **kwargs) - self.loss_ld = MODELS.build(loss_ld) - - def loss_by_feat_single(self, anchors: Tensor, cls_score: Tensor, - bbox_pred: Tensor, labels: Tensor, - label_weights: Tensor, bbox_targets: Tensor, - stride: Tuple[int], soft_targets: Tensor, - avg_factor: int): - """Calculate the loss of a single scale level based on the features - extracted by the detection head. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - soft_targets (Tensor): Soft BBox regression targets. - avg_factor (int): Average factor that is used to average - the loss. When using sampling method, avg_factor is usually - the sum of positive and negative priors. When using - `PseudoSampler`, `avg_factor` is usually equal to the number - of positive priors. 
- - Returns: - dict[tuple, Tensor]: Loss components and weight targets. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - soft_targets = soft_targets.permute(0, 2, 3, - 1).reshape(-1, - 4 * (self.reg_max + 1)) - - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchor_centers, pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - pos_soft_targets = soft_targets[pos_inds] - soft_corners = pos_soft_targets.reshape(-1, self.reg_max + 1) - - target_corners = self.bbox_coder.encode(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - # ld loss - loss_ld = self.loss_ld( - pred_corners, - soft_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - else: - loss_ld = bbox_pred.sum() * 0 - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=avg_factor) - - return loss_cls, loss_bbox, loss_dfl, loss_ld, weight_targets.sum() - - def loss(self, x: List[Tensor], out_teacher: Tuple[Tensor], - batch_data_samples: SampleList) -> dict: - """ - Args: - x (list[Tensor]): Features from FPN. - out_teacher (tuple[Tensor]): The output of teacher. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. - - Returns: - tuple[dict, list]: The loss components and proposals of each image. - - - losses (dict[str, Tensor]): A dictionary of loss components. - - proposal_list (list[Tensor]): Proposals of each image. 
- """ - outputs = unpack_gt_instances(batch_data_samples) - batch_gt_instances, batch_gt_instances_ignore, batch_img_metas \ - = outputs - - outs = self(x) - soft_targets = out_teacher[1] - loss_inputs = outs + (batch_gt_instances, batch_img_metas, - soft_targets) - losses = self.loss_by_feat( - *loss_inputs, batch_gt_instances_ignore=batch_gt_instances_ignore) - - return losses - - def loss_by_feat( - self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - soft_targets: List[Tensor], - batch_gt_instances_ignore: OptInstanceList = None) -> dict: - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - soft_targets (list[Tensor]): Soft BBox regression targets. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, batch_img_metas, device=device) - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - batch_gt_instances, - batch_img_metas, - batch_gt_instances_ignore=batch_gt_instances_ignore) - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, avg_factor) = cls_reg_targets - - avg_factor = reduce_mean( - torch.tensor(avg_factor, dtype=torch.float, device=device)).item() - - losses_cls, losses_bbox, losses_dfl, losses_ld, \ - avg_factor = multi_apply( - self.loss_by_feat_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.prior_generator.strides, - soft_targets, - avg_factor=avg_factor) - - avg_factor = sum(avg_factor) + 1e-6 - avg_factor = reduce_mean(avg_factor).item() - losses_bbox = [x / avg_factor for x in losses_bbox] - losses_dfl = [x / avg_factor for x in losses_dfl] - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_dfl=losses_dfl, - loss_ld=losses_ld) diff --git a/spaces/Lamai/LAMAIGPT/.devcontainer/Dockerfile b/spaces/Lamai/LAMAIGPT/.devcontainer/Dockerfile deleted file mode 100644 index 02f580a02e11f3d711350448c6f5d17f4f74b8c1..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/.devcontainer/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -# [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3-bullseye, 3.10-bullseye, 3-buster, 3.10-buster -ARG VARIANT=3-bullseye -FROM --platform=linux/amd64 python:3.10 - -RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ - # Remove imagemagick due to https://security-tracker.debian.org/tracker/CVE-2019-10131 - && apt-get purge -y imagemagick imagemagick-6-common - -# Temporary: Upgrade python packages due to 
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40897 -# They are installed by the base image (python) which does not have the patch. -RUN python3 -m pip install --upgrade setuptools - -# Install Chrome for web browsing -RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ - && curl -sSL https://dl.google.com/linux/direct/google-chrome-stable_current_$(dpkg --print-architecture).deb -o /tmp/chrome.deb \ - && apt-get -y install /tmp/chrome.deb - -# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image. -# COPY requirements.txt /tmp/pip-tmp/ -# RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \ -# && rm -rf /tmp/pip-tmp - -# [Optional] Uncomment this section to install additional OS packages. -# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ -# && apt-get -y install --no-install-recommends - -# [Optional] Uncomment this line to install global node packages. -# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g " 2>&1 diff --git a/spaces/Lamai/LAMAIGPT/tests/test_json_parser.py b/spaces/Lamai/LAMAIGPT/tests/test_json_parser.py deleted file mode 100644 index 41c90a6f66c0b0468f1443de80033cc4f268eca0..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/test_json_parser.py +++ /dev/null @@ -1,111 +0,0 @@ -import unittest - -import tests.context -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. 
- -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. 
I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/MOULI17/CmGenAIChatbot/app.py b/spaces/MOULI17/CmGenAIChatbot/app.py deleted file mode 100644 index d4e8c682c1696197371641afad14c940b3d5ab15..0000000000000000000000000000000000000000 --- a/spaces/MOULI17/CmGenAIChatbot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a tech-savvy computer science student who spends countless hours coding, building apps, and keeping up with the latest tech trends. You enjoy discussing programming languages, AI, and gadgets and are always ready to troubleshoot tech-related problems. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Manmay/tortoise-tts/tortoise/utils/tokenizer.py b/spaces/Manmay/tortoise-tts/tortoise/utils/tokenizer.py deleted file mode 100644 index 922f23ec2ae383abb653ca01d5b6c008a3b0b5fe..0000000000000000000000000000000000000000 --- a/spaces/Manmay/tortoise-tts/tortoise/utils/tokenizer.py +++ /dev/null @@ -1,194 +0,0 @@ -import os -import re - -import inflect -import torch -from tokenizers import Tokenizer - - -# Regular expression matching whitespace: -from unidecode import unidecode - -_whitespace_re = re.compile(r'\s+') - - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterate to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including number and abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = 
collapse_whitespace(text) - text = text.replace('"', '') - return text - - -def lev_distance(s1, s2): - if len(s1) > len(s2): - s1, s2 = s2, s1 - - distances = range(len(s1) + 1) - for i2, c2 in enumerate(s2): - distances_ = [i2 + 1] - for i1, c1 in enumerate(s1): - if c1 == c2: - distances_.append(distances[i1]) - else: - distances_.append(1 + min((distances[i1], distances[i1 + 1], distances_[-1]))) - distances = distances_ - return distances[-1] - - -DEFAULT_VOCAB_FILE = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../data/tokenizer.json') - - -class VoiceBpeTokenizer: - def __init__(self, vocab_file=None, use_basic_cleaners=False): - self.tokenizer = Tokenizer.from_file( - DEFAULT_VOCAB_FILE if vocab_file is None else vocab_file - ) - if use_basic_cleaners: - self.preprocess_text = basic_cleaners - else: - self.preprocess_text = english_cleaners - - def encode(self, txt): - txt = self.preprocess_text(txt) - txt = txt.replace(' ', '[SPACE]') - return self.tokenizer.encode(txt).ids - - def decode(self, seq): - if isinstance(seq, torch.Tensor): - seq = seq.cpu().numpy() - txt = self.tokenizer.decode(seq, skip_special_tokens=False).replace(' ', '') - txt = txt.replace('[SPACE]', ' ') - txt = txt.replace('[STOP]', '') - txt = txt.replace('[UNK]', '') - return txt diff --git a/spaces/Marshalls/testmtd/feature_extraction/convert_video_to_audio_parallel.sh b/spaces/Marshalls/testmtd/feature_extraction/convert_video_to_audio_parallel.sh deleted file mode 100644 index d6a9244f0e3aad014a473cb2e0d0c036c76cc88d..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/convert_video_to_audio_parallel.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash - -find videos/ -type f -name '*.mp4' -print0 | parallel -0 ffmpeg -i {} {.}.wav diff --git a/spaces/MirageML/lowpoly-town/README.md b/spaces/MirageML/lowpoly-town/README.md deleted file mode 100644 index dadbe15b73d12560fcddf761fdabb7f93c71324e..0000000000000000000000000000000000000000 --- a/spaces/MirageML/lowpoly-town/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Lowpoly Town -emoji: 👁 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Miuzarte/SUI-svc-4.0/modules/losses.py b/spaces/Miuzarte/SUI-svc-4.0/modules/losses.py deleted file mode 100644 index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/modules/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import modules.commons as commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def 
kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/sdmgr.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/sdmgr.py deleted file mode 100644 index 670dcdf59827ffb2ea3926474cddbdef76bdb105..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/sdmgr.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Dict, List, Optional, Sequence, Tuple - -import torch -from mmdet.structures.bbox import bbox2roi -from mmengine.model import BaseModel -from torch import nn - -from mmocr.registry import MODELS, TASK_UTILS -from mmocr.structures import KIEDataSample - - -@MODELS.register_module() -class SDMGR(BaseModel): - """The implementation of the paper: Spatial Dual-Modality Graph Reasoning - for Key Information Extraction. https://arxiv.org/abs/2103.14470. - - Args: - backbone (dict, optional): Config of backbone. If None, None will be - passed to kie_head during training and testing. Defaults to None. - roi_extractor (dict, optional): Config of roi extractor. Only - applicable when backbone is not None. Defaults to None. - neck (dict, optional): Config of neck. Defaults to None. - kie_head (dict): Config of KIE head. Defaults to None. - dictionary (dict, optional): Config of dictionary. Defaults to None. - data_preprocessor (dict or ConfigDict, optional): The pre-process - config of :class:`BaseDataPreprocessor`. it usually includes, - ``pad_size_divisor``, ``pad_value``, ``mean`` and ``std``. It has - to be None when working in non-visual mode. Defaults to None. - init_cfg (dict or list[dict], optional): Initialization configs. - Defaults to None. - """ - - def __init__(self, - backbone: Optional[Dict] = None, - roi_extractor: Optional[Dict] = None, - neck: Optional[Dict] = None, - kie_head: Dict = None, - dictionary: Optional[Dict] = None, - data_preprocessor: Optional[Dict] = None, - init_cfg: Optional[Dict] = None) -> None: - super().__init__( - data_preprocessor=data_preprocessor, init_cfg=init_cfg) - if dictionary is not None: - self.dictionary = TASK_UTILS.build(dictionary) - if kie_head.get('dictionary', None) is None: - kie_head.update(dictionary=self.dictionary) - else: - warnings.warn(f"Using dictionary {kie_head['dictionary']} " - "in kie_head's config.") - if backbone is not None: - self.backbone = MODELS.build(backbone) - self.extractor = MODELS.build({ - **roi_extractor, 'out_channels': - self.backbone.base_channels - }) - self.maxpool = nn.MaxPool2d( - roi_extractor['roi_layer']['output_size']) - if neck is not None: - self.neck = MODELS.build(neck) - self.kie_head = MODELS.build(kie_head) - - def extract_feat(self, img: torch.Tensor, - gt_bboxes: List[torch.Tensor]) -> torch.Tensor: - """Extract features from images if self.backbone is not None. It - returns None otherwise. - - Args: - img (torch.Tensor): The input image with shape (N, C, H, W). - gt_bboxes (list[torch.Tensor)): A list of ground truth bounding - boxes, each of shape :math:`(N_i, 4)`. - - Returns: - torch.Tensor: The extracted features with shape (N, E). 
- """ - if not hasattr(self, 'backbone'): - return None - x = self.backbone(img) - if hasattr(self, 'neck'): - x = self.neck(x) - x = x[-1] - feats = self.maxpool(self.extractor([x], bbox2roi(gt_bboxes))) - return feats.view(feats.size(0), -1) - - def forward(self, - inputs: torch.Tensor, - data_samples: Sequence[KIEDataSample] = None, - mode: str = 'tensor', - **kwargs) -> torch.Tensor: - """The unified entry for a forward process in both training and test. - - The method should accept three modes: "tensor", "predict" and "loss": - - - "tensor": Forward the whole network and return tensor or tuple of - tensor without any post-processing, same as a common nn.Module. - - "predict": Forward and return the predictions, which are fully - processed to a list of :obj:`DetDataSample`. - - "loss": Forward and return a dict of losses according to the given - inputs and data samples. - - Note that this method doesn't handle neither back propagation nor - optimizer updating, which are done in the :meth:`train_step`. - - Args: - inputs (torch.Tensor): The input tensor with shape - (N, C, ...) in general. - data_samples (list[:obj:`DetDataSample`], optional): The - annotation data of every samples. Defaults to None. - mode (str): Return what kind of value. Defaults to 'tensor'. - - Returns: - The return type depends on ``mode``. - - - If ``mode="tensor"``, return a tensor or a tuple of tensor. - - If ``mode="predict"``, return a list of :obj:`DetDataSample`. - - If ``mode="loss"``, return a dict of tensor. - """ - if mode == 'loss': - return self.loss(inputs, data_samples, **kwargs) - elif mode == 'predict': - return self.predict(inputs, data_samples, **kwargs) - elif mode == 'tensor': - return self._forward(inputs, data_samples, **kwargs) - else: - raise RuntimeError(f'Invalid mode "{mode}". ' - 'Only supports loss, predict and tensor mode') - - def loss(self, inputs: torch.Tensor, data_samples: Sequence[KIEDataSample], - **kwargs) -> dict: - """Calculate losses from a batch of inputs and data samples. - - Args: - inputs (torch.Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - data_samples (list[KIEDataSample]): A list of N datasamples, - containing meta information and gold annotations for each of - the images. - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - x = self.extract_feat( - inputs, - [data_sample.gt_instances.bboxes for data_sample in data_samples]) - return self.kie_head.loss(x, data_samples) - - def predict(self, inputs: torch.Tensor, - data_samples: Sequence[KIEDataSample], - **kwargs) -> List[KIEDataSample]: - """Predict results from a batch of inputs and data samples with post- - processing. - Args: - inputs (torch.Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - data_samples (list[KIEDataSample]): A list of N datasamples, - containing meta information and gold annotations for each of - the images. - - Returns: - List[KIEDataSample]: A list of datasamples of prediction results. - Results are stored in ``pred_instances.labels`` and - ``pred_instances.edge_labels``. - """ - x = self.extract_feat( - inputs, - [data_sample.gt_instances.bboxes for data_sample in data_samples]) - return self.kie_head.predict(x, data_samples) - - def _forward(self, inputs: torch.Tensor, - data_samples: Sequence[KIEDataSample], - **kwargs) -> Tuple[torch.Tensor, torch.Tensor]: - """Get the raw tensor outputs from backbone and head without any post- - processing. 
- - Args: - inputs (torch.Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - data_samples (list[KIEDataSample]): A list of N datasamples, - containing meta information and gold annotations for each of - the images. - - Returns: - tuple(torch.Tensor, torch.Tensor): Tensor output from head. - - - node_cls (torch.Tensor): Node classification output. - - edge_cls (torch.Tensor): Edge classification output. - """ - x = self.extract_feat( - inputs, - [data_sample.gt_instances.bboxes for data_sample in data_samples]) - return self.kie_head(x, data_samples) diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/dbnet.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/dbnet.py deleted file mode 100644 index eed95b9fabd24ff17ffcba05fb814c0f1cdc9b42..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/dbnet.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmocr.registry import MODELS -from .single_stage_text_detector import SingleStageTextDetector - - -@MODELS.register_module() -class DBNet(SingleStageTextDetector): - """The class for implementing DBNet text detector: Real-time Scene Text - Detection with Differentiable Binarization. - - [https://arxiv.org/abs/1911.08947]. - """ diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/train.py b/spaces/Mountchicken/MAERec-Gradio/tools/train.py deleted file mode 100644 index 349cbb3ef4fc74c821c9bd4266ecacdd6acd8cc2..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/train.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import logging -import os -import os.path as osp - -from mmengine.config import Config, DictAction -from mmengine.logging import print_log -from mmengine.registry import RUNNERS -from mmengine.runner import Runner - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a model') - parser.add_argument('config', help='Train config file path') - parser.add_argument('--work-dir', help='The dir to save logs and models') - parser.add_argument( - '--resume', action='store_true', help='Whether to resume checkpoint.') - parser.add_argument( - '--amp', - action='store_true', - default=False, - help='Enable automatic-mixed-precision training') - parser.add_argument( - '--auto-scale-lr', - action='store_true', - help='Whether to scale the learning rate automatically. It requires ' - '`auto_scale_lr` in config, and `base_batch_size` in `auto_scale_lr`') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='Override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='Job launcher') - # When using PyTorch version >= 2.0.0, the `torch.distributed.launch` - # will pass the `--local-rank` parameter to `tools/train.py` instead - # of `--local_rank`. 
- parser.add_argument('--local_rank', '--local-rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - return args - - -def main(): - args = parse_args() - # load config - cfg = Config.fromfile(args.config) - cfg.launcher = args.launcher - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - # enable automatic-mixed-precision training - if args.amp: - optim_wrapper = cfg.optim_wrapper.type - if optim_wrapper == 'AmpOptimWrapper': - print_log( - 'AMP training is already enabled in your config.', - logger='current', - level=logging.WARNING) - else: - assert optim_wrapper == 'OptimWrapper', ( - '`--amp` is only supported when the optimizer wrapper type is ' - f'`OptimWrapper` but got {optim_wrapper}.') - cfg.optim_wrapper.type = 'AmpOptimWrapper' - cfg.optim_wrapper.loss_scale = 'dynamic' - - if args.resume: - cfg.resume = True - - # enable automatically scaling LR - if args.auto_scale_lr: - if 'auto_scale_lr' in cfg and \ - 'base_batch_size' in cfg.auto_scale_lr: - cfg.auto_scale_lr.enable = True - else: - raise RuntimeError('Can not find "auto_scale_lr" or ' - '"auto_scale_lr.base_batch_size" in your' - ' configuration file.') - - # build the runner from config - if 'runner_type' not in cfg: - # build the default runner - runner = Runner.from_cfg(cfg) - else: - # build customized runner from the registry - # if 'runner_type' is set in the cfg - runner = RUNNERS.build(cfg) - - # start training - runner.train() - - -if __name__ == '__main__': - main() diff --git a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_scale.py b/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_scale.py deleted file mode 100644 index 610db6b480f2482477c6112103cbeeb1e98304da..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_scale.py +++ /dev/null @@ -1,86 +0,0 @@ -import numpy as np -from .boxes_utils import assert_and_normalize_shape - - -def scale_boxes(boxes, x_scale=1, y_scale=1, x_center=0, y_center=0, copy=True): - """Scale boxes coordinates in x and y dimensions. - - Args: - boxes: (N, 4+K) - x_scale: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale factor in x dimension - y_scale: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale factor in y dimension - x_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - y_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - - References: - `core.box_list_ops.scale` in TensorFlow object detection API - `utils.box_list_ops.scale` in TensorFlow object detection API - `datasets.pipelines.Resize._resize_bboxes` in mmdetection - `core.anchor.guided_anchor_target.calc_region` in mmdetection where comments may be misleading! 
- `layers.mask_ops.scale_boxes` in detectron2 - `mmcv.bbox_scaling` - """ - boxes = np.array(boxes, dtype=np.float32, copy=copy) - - x_scale = np.asarray(x_scale, np.float32) - y_scale = np.asarray(y_scale, np.float32) - x_scale = assert_and_normalize_shape(x_scale, boxes.shape[0]) - y_scale = assert_and_normalize_shape(y_scale, boxes.shape[0]) - - x_center = np.asarray(x_center, np.float32) - y_center = np.asarray(y_center, np.float32) - x_center = assert_and_normalize_shape(x_center, boxes.shape[0]) - y_center = assert_and_normalize_shape(y_center, boxes.shape[0]) - - x_shift = 1 - x_scale - y_shift = 1 - y_scale - x_shift *= x_center - y_shift *= y_center - - boxes[:, 0] *= x_scale - boxes[:, 1] *= y_scale - boxes[:, 2] *= x_scale - boxes[:, 3] *= y_scale - boxes[:, 0] += x_shift - boxes[:, 1] += y_shift - boxes[:, 2] += x_shift - boxes[:, 3] += y_shift - return boxes - - -def scale_boxes_wrt_centers(boxes, x_scale=1, y_scale=1, copy=True): - """ - Args: - boxes: (N, 4+K) - x_scale: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale factor in x dimension - y_scale: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale factor in y dimension - - References: - `core.anchor.guided_anchor_target.calc_region` in mmdetection where comments may be misleading! - `layers.mask_ops.scale_boxes` in detectron2 - `mmcv.bbox_scaling` - """ - boxes = np.array(boxes, dtype=np.float32, copy=copy) - - x_scale = np.asarray(x_scale, np.float32) - y_scale = np.asarray(y_scale, np.float32) - x_scale = assert_and_normalize_shape(x_scale, boxes.shape[0]) - y_scale = assert_and_normalize_shape(y_scale, boxes.shape[0]) - - x_factor = (x_scale - 1) * 0.5 - y_factor = (y_scale - 1) * 0.5 - x_deltas = boxes[:, 2] - boxes[:, 0] - y_deltas = boxes[:, 3] - boxes[:, 1] - x_deltas *= x_factor - y_deltas *= y_factor - - boxes[:, 0] -= x_deltas - boxes[:, 1] -= y_deltas - boxes[:, 2] += x_deltas - boxes[:, 3] += y_deltas - return boxes - diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/common/rollout_test.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/common/rollout_test.py deleted file mode 100644 index 5be4cb0fafd8a2e94004c17b41e189d989a3a851..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/common/rollout_test.py +++ /dev/null @@ -1,129 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -"""Tests for common.rollout.""" - -import numpy as np -import tensorflow as tf - -from common import rollout as rollout_lib # brain coder - - -class RolloutTest(tf.test.TestCase): - - def MakeRollout(self, states, actions, rewards, values=None, terminated=True): - rollout = rollout_lib.Rollout() - rollout.add_many( - states=states, actions=actions, rewards=rewards, values=values, - terminated=terminated) - return rollout - - def testDiscount(self): - discounted = np.array([1.0 / 2 ** n for n in range(4, -1, -1)]) - discounted[:2] += [1.0 / 2 ** n for n in range(1, -1, -1)] - - self.assertTrue(np.array_equal( - rollout_lib.discount([0.0, 1.0, 0.0, 0.0, 1.0], 0.50), - discounted)) - self.assertTrue(np.array_equal( - rollout_lib.discount(np.array([0.0, 1.0, 0.0, 0.0, 1.0]), 0.50), - discounted)) - - def testDiscountedAdvantageAndRewards(self): - # lambda=1, No bootstrapping. 
- values = [0.1, 0.5, 0.5, 0.25] - (empirical_values, - generalized_advantage) = rollout_lib.discounted_advantage_and_rewards( - [0.0, 0.0, 0.0, 1.0], - values, - gamma=0.75, - lambda_=1.0) - expected_discounted_r = ( - np.array([1.0 * 0.75 ** n for n in range(3, -1, -1)])) - expected_adv = expected_discounted_r - values - self.assertTrue(np.array_equal(empirical_values, expected_discounted_r)) - self.assertTrue(np.allclose(generalized_advantage, expected_adv)) - - # lambda=1, With bootstrapping. - values = [0.1, 0.5, 0.5, 0.25, 0.75] - (empirical_values, - generalized_advantage) = rollout_lib.discounted_advantage_and_rewards( - [0.0, 0.0, 0.0, 1.0], - values, - gamma=0.75, - lambda_=1.0) - expected_discounted_r = ( - np.array([0.75 * 0.75 ** n for n in range(4, 0, -1)]) - + np.array([1.0 * 0.75 ** n for n in range(3, -1, -1)])) - expected_adv = expected_discounted_r - values[:-1] - self.assertTrue(np.array_equal(empirical_values, expected_discounted_r)) - self.assertTrue(np.allclose(generalized_advantage, expected_adv)) - - # lambda=0.5, With bootstrapping. - values = [0.1, 0.5, 0.5, 0.25, 0.75] - rewards = [0.0, 0.0, 0.0, 1.0] - l = 0.5 # lambda - g = 0.75 # gamma - (empirical_values, - generalized_advantage) = rollout_lib.discounted_advantage_and_rewards( - rewards, - values, - gamma=g, - lambda_=l) - expected_discounted_r = ( - np.array([0.75 * g ** n for n in range(4, 0, -1)]) - + np.array([1.0 * g ** n for n in range(3, -1, -1)])) - expected_adv = [0.0] * len(values) - for t in range(3, -1, -1): - delta_t = rewards[t] + g * values[t + 1] - values[t] - expected_adv[t] = delta_t + g * l * expected_adv[t + 1] - expected_adv = expected_adv[:-1] - self.assertTrue(np.array_equal(empirical_values, expected_discounted_r)) - self.assertTrue(np.allclose(generalized_advantage, expected_adv)) - - def testProcessRollouts(self): - g = 0.95 - rollouts = [ - self.MakeRollout( - states=[3, 6, 9], - actions=[1, 2, 3], - rewards=[1.0, -1.0, 0.5], - values=[0.5, 0.5, 0.1]), - self.MakeRollout( - states=[10], - actions=[5], - rewards=[1.0], - values=[0.5])] - batch = rollout_lib.process_rollouts(rollouts, gamma=g) - - self.assertEqual(2, batch.batch_size) - self.assertEqual(3, batch.max_time) - self.assertEqual([3, 1], batch.episode_lengths) - self.assertEqual([0.5, 1.0], batch.total_rewards) - self.assertEqual( - [[3, 6, 9], [10, 0, 0]], - batch.states.tolist()) - self.assertEqual( - [[1, 2, 3], [5, 0, 0]], - batch.actions.tolist()) - - rew1, rew2 = rollouts[0].rewards, rollouts[1].rewards - expected_discounted_rewards = [ - [rew1[0] + g * rew1[1] + g * g * rew1[2], - rew1[1] + g * rew1[2], - rew1[2]], - [rew2[0], 0.0, 0.0]] - expected_advantages = [ - [dr - v - for dr, v - in zip(expected_discounted_rewards[0], rollouts[0].values)], - [expected_discounted_rewards[1][0] - rollouts[1].values[0], 0.0, 0.0]] - self.assertTrue( - np.allclose(expected_discounted_rewards, batch.discounted_r)) - self.assertTrue( - np.allclose(expected_advantages, batch.discounted_adv)) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NN520/AI/src/components/ui/sheet.tsx b/spaces/NN520/AI/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const 
Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/hparams.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/hparams.py deleted file mode 100644 index 9a8c16471903b0c92253b1d70fcd6a61d10e085f..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/hparams.py +++ /dev/null @@ -1,31 +0,0 @@ -## Mel-filterbank -mel_window_length = 25 # In milliseconds -mel_window_step = 10 # In milliseconds -mel_n_channels = 40 - - -## Audio -sampling_rate = 16000 -# Number of spectrogram frames in a partial utterance -partials_n_frames = 160 # 1600 ms - - -## Voice Activation Detection -# Window size of the VAD. Must be either 10, 20 or 30 milliseconds. -# This sets the granularity of the VAD. Should not need to be changed. -vad_window_length = 30 # In milliseconds -# Number of frames to average together when performing the moving average smoothing. -# The larger this value, the larger the VAD variations must be to not get smoothed out. -vad_moving_average_width = 8 -# Maximum number of consecutive silent frames a segment can have. -vad_max_silence_length = 6 - - -## Audio volume normalization -audio_norm_target_dBFS = -30 - - -## Model parameters -model_hidden_size = 256 -model_embedding_size = 256 -model_num_layers = 3 \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/commonsense_qa/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/commonsense_qa/__init__.py deleted file mode 100644 index 42d21f35eb3dd33a053dcf0edd5eadd2dff11294..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/commonsense_qa/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import commonsense_qa_task # noqa diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/scoring/bleu.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/scoring/bleu.py deleted file mode 100644 index 97de5f966ec08e5a304c41358e67755c601622b7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/scoring/bleu.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import ctypes -import math -import sys -from dataclasses import dataclass, field - -import torch -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer -from fairseq.scoring.tokenizer import EvaluationTokenizer - - -class BleuStat(ctypes.Structure): - _fields_ = [ - ("reflen", ctypes.c_size_t), - ("predlen", ctypes.c_size_t), - ("match1", ctypes.c_size_t), - ("count1", ctypes.c_size_t), - ("match2", ctypes.c_size_t), - ("count2", ctypes.c_size_t), - ("match3", ctypes.c_size_t), - ("count3", ctypes.c_size_t), - ("match4", ctypes.c_size_t), - ("count4", ctypes.c_size_t), - ] - - -@dataclass -class SacrebleuConfig(FairseqDataclass): - sacrebleu_tokenizer: EvaluationTokenizer.ALL_TOKENIZER_TYPES = field( - default="13a", metadata={"help": "tokenizer"} - ) - sacrebleu_lowercase: bool = field( - default=False, metadata={"help": "apply lowercasing"} - ) - sacrebleu_char_level: bool = field( - default=False, metadata={"help": "evaluate at character level"} - ) - - -@register_scorer("sacrebleu", dataclass=SacrebleuConfig) -class SacrebleuScorer(BaseScorer): - def __init__(self, cfg): - super(SacrebleuScorer, self).__init__(cfg) - import sacrebleu - - self.sacrebleu = sacrebleu - self.tokenizer = EvaluationTokenizer( - tokenizer_type=cfg.sacrebleu_tokenizer, - lowercase=cfg.sacrebleu_lowercase, - character_tokenization=cfg.sacrebleu_char_level, - ) - - def add_string(self, ref, pred): - self.ref.append(self.tokenizer.tokenize(ref)) - self.pred.append(self.tokenizer.tokenize(pred)) - - def score(self, order=4): - return self.result_string(order).score - - def result_string(self, order=4): - if order != 4: - raise NotImplementedError - # tokenization and lowercasing are performed by self.tokenizer instead. - return self.sacrebleu.corpus_bleu( - self.pred, [self.ref], tokenize="none" - ).format() - - -@dataclass -class BleuConfig(FairseqDataclass): - pad: int = field(default=1, metadata={"help": "padding index"}) - eos: int = field(default=2, metadata={"help": "eos index"}) - unk: int = field(default=3, metadata={"help": "unk index"}) - - -@register_scorer("bleu", dataclass=BleuConfig) -class Scorer(object): - def __init__(self, cfg): - self.stat = BleuStat() - self.pad = cfg.pad - self.eos = cfg.eos - self.unk = cfg.unk - - try: - from fairseq import libbleu - except ImportError as e: - sys.stderr.write( - "ERROR: missing libbleu.so. 
run `pip install --editable .`\n" - ) - raise e - - self.C = ctypes.cdll.LoadLibrary(libbleu.__file__) - - self.reset() - - def reset(self, one_init=False): - if one_init: - self.C.bleu_one_init(ctypes.byref(self.stat)) - else: - self.C.bleu_zero_init(ctypes.byref(self.stat)) - - def add(self, ref, pred): - if not isinstance(ref, torch.IntTensor): - raise TypeError("ref must be a torch.IntTensor (got {})".format(type(ref))) - if not isinstance(pred, torch.IntTensor): - raise TypeError("pred must be a torch.IntTensor(got {})".format(type(pred))) - - # don't match unknown words - rref = ref.clone() - assert not rref.lt(0).any() - rref[rref.eq(self.unk)] = -999 - - rref = rref.contiguous().view(-1) - pred = pred.contiguous().view(-1) - - self.C.bleu_add( - ctypes.byref(self.stat), - ctypes.c_size_t(rref.size(0)), - ctypes.c_void_p(rref.data_ptr()), - ctypes.c_size_t(pred.size(0)), - ctypes.c_void_p(pred.data_ptr()), - ctypes.c_int(self.pad), - ctypes.c_int(self.eos), - ) - - def score(self, order=4): - psum = sum( - math.log(p) if p > 0 else float("-Inf") for p in self.precision()[:order] - ) - return self.brevity() * math.exp(psum / order) * 100 - - def precision(self): - def ratio(a, b): - return a / b if b > 0 else 0 - - return [ - ratio(self.stat.match1, self.stat.count1), - ratio(self.stat.match2, self.stat.count2), - ratio(self.stat.match3, self.stat.count3), - ratio(self.stat.match4, self.stat.count4), - ] - - def brevity(self): - r = self.stat.reflen / self.stat.predlen - return min(1, math.exp(1 - r)) - - def result_string(self, order=4): - assert order <= 4, "BLEU scores for order > 4 aren't supported" - fmt = "BLEU{} = {:2.2f}, {:2.1f}" - for _ in range(1, order): - fmt += "/{:2.1f}" - fmt += " (BP={:.3f}, ratio={:.3f}, syslen={}, reflen={})" - bleup = [p * 100 for p in self.precision()[:order]] - return fmt.format( - order, - self.score(order=order), - *bleup, - self.brevity(), - self.stat.predlen / self.stat.reflen, - self.stat.predlen, - self.stat.reflen - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py deleted file mode 100644 index 44f7989bd863329f763aa62b78df2eb42b3084ea..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch.nn as nn -from fairseq.models.transformer import TransformerEncoder - -from .linformer_sentence_encoder_layer import LinformerTransformerEncoderLayer - - -class LinformerTransformerEncoder(TransformerEncoder): - """ - Implementation for a Bi-directional Linformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - LinformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). 
- - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C. - """ - - def __init__(self, args, dictionary, embed_tokens): - self.compress_layer = None - super().__init__(args, dictionary, embed_tokens) - - def build_encoder_layer(self, args): - if self.args.shared_layer_kv_compressed == 1 and self.compress_layer is None: - compress_layer = nn.Linear( - self.args.max_positions, - self.args.max_positions // self.args.compressed, - ) - # intialize parameters for compressed layer - nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2)) - if self.args.freeze_compress == 1: - compress_layer.weight.requires_grad = False - self.compress_layer = compress_layer - - return LinformerTransformerEncoderLayer(args, self.compress_layer) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/README.md deleted file mode 100644 index ed4d5df52ccea01216276054a1f253d0d16c0409..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/README.md +++ /dev/null @@ -1,296 +0,0 @@ -# RoBERTa: A Robustly Optimized BERT Pretraining Approach - -https://arxiv.org/abs/1907.11692 - -## Introduction - -RoBERTa iterates on BERT's pretraining procedure, including training the model longer, with bigger batches over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data. See the associated paper for more details. - -### What's New: - -- December 2020: German model (GottBERT) is available: [GottBERT](https://github.com/pytorch/fairseq/tree/main/examples/gottbert). -- January 2020: Italian model (UmBERTo) is available from Musixmatch Research: [UmBERTo](https://github.com/musixmatchresearch/umberto). -- November 2019: French model (CamemBERT) is available: [CamemBERT](https://github.com/pytorch/fairseq/tree/main/examples/camembert). -- November 2019: Multilingual encoder (XLM-RoBERTa) is available: [XLM-R](https://github.com/pytorch/fairseq/tree/main/examples/xlmr). -- September 2019: TensorFlow and TPU support via the [transformers library](https://github.com/huggingface/transformers). -- August 2019: RoBERTa is now supported in the [pytorch-transformers library](https://github.com/huggingface/pytorch-transformers). -- August 2019: Added [tutorial for finetuning on WinoGrande](https://github.com/pytorch/fairseq/tree/main/examples/roberta/wsc#roberta-training-on-winogrande-dataset). -- August 2019: Added [tutorial for pretraining RoBERTa using your own data](README.pretraining.md). 
- -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`roberta.base` | RoBERTa using the BERT-base architecture | 125M | [roberta.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz) -`roberta.large` | RoBERTa using the BERT-large architecture | 355M | [roberta.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz) -`roberta.large.mnli` | `roberta.large` finetuned on [MNLI](http://www.nyu.edu/projects/bowman/multinli) | 355M | [roberta.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.mnli.tar.gz) -`roberta.large.wsc` | `roberta.large` finetuned on [WSC](wsc/README.md) | 355M | [roberta.large.wsc.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.wsc.tar.gz) - -## Results - -**[GLUE (Wang et al., 2019)](https://gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B ----|---|---|---|---|---|---|---|--- -`roberta.base` | 87.6 | 92.8 | 91.9 | 78.7 | 94.8 | 90.2 | 63.6 | 91.2 -`roberta.large` | 90.2 | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 68.0 | 92.4 -`roberta.large.mnli` | 90.2 | - | - | - | - | - | - | - - -**[SuperGLUE (Wang et al., 2019)](https://super.gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | BoolQ | CB | COPA | MultiRC | RTE | WiC | WSC ----|---|---|---|---|---|---|--- -`roberta.large` | 86.9 | 98.2 | 94.0 | 85.7 | 89.5 | 75.6 | - -`roberta.large.wsc` | - | - | - | - | - | - | 91.3 - -**[SQuAD (Rajpurkar et al., 2018)](https://rajpurkar.github.io/SQuAD-explorer/)** -_(dev set, no additional data used)_ - -Model | SQuAD 1.1 EM/F1 | SQuAD 2.0 EM/F1 ----|---|--- -`roberta.large` | 88.9/94.6 | 86.5/89.4 - -**[RACE (Lai et al., 2017)](http://www.qizhexie.com/data/RACE_leaderboard.html)** -_(test set)_ - -Model | Accuracy | Middle | High ----|---|---|--- -`roberta.large` | 83.2 | 86.5 | 81.3 - -**[HellaSwag (Zellers et al., 2019)](https://rowanzellers.com/hellaswag/)** -_(test set)_ - -Model | Overall | In-domain | Zero-shot | ActivityNet | WikiHow ----|---|---|---|---|--- -`roberta.large` | 85.2 | 87.3 | 83.1 | 74.6 | 90.9 - -**[Commonsense QA (Talmor et al., 2019)](https://www.tau-nlp.org/commonsenseqa)** -_(test set)_ - -Model | Accuracy ----|--- -`roberta.large` (single model) | 72.1 -`roberta.large` (ensemble) | 72.5 - -**[Winogrande (Sakaguchi et al., 2019)](https://arxiv.org/abs/1907.10641)** -_(test set)_ - -Model | Accuracy ----|--- -`roberta.large` | 78.1 - -**[XNLI (Conneau et al., 2018)](https://arxiv.org/abs/1809.05053)** -_(TRANSLATE-TEST)_ - -Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur ----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- -`roberta.large.mnli` | 91.3 | 82.91 | 84.27 | 81.24 | 81.74 | 83.13 | 78.28 | 76.79 | 76.64 | 74.17 | 74.05 | 77.5 | 70.9 | 66.65 | 66.81 - -## Example usage - -##### Load RoBERTa from torch.hub (PyTorch >= 1.1): -```python -import torch -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large') -roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load RoBERTa (for PyTorch 1.0 or custom models): -```python -# Download roberta.large model -wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz -tar -xzvf roberta.large.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import RobertaModel -roberta = RobertaModel.from_pretrained('/path/to/roberta.large', checkpoint_file='model.pt') 
-roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Apply Byte-Pair Encoding (BPE) to input text: -```python -tokens = roberta.encode('Hello world!') -assert tokens.tolist() == [0, 31414, 232, 328, 2] -roberta.decode(tokens) # 'Hello world!' -``` - -##### Extract features from RoBERTa: -```python -# Extract the last layer's features -last_layer_features = roberta.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 5, 1024]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = roberta.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 25 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -##### Use RoBERTa for sentence-pair classification tasks: -```python -# Download RoBERTa already finetuned for MNLI -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') -roberta.eval() # disable dropout for evaluation - -# Encode a pair of sentences and make a prediction -tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.') -roberta.predict('mnli', tokens).argmax() # 0: contradiction - -# Encode another pair of sentences -tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is based on BERT.') -roberta.predict('mnli', tokens).argmax() # 2: entailment -``` - -##### Register a new (randomly initialized) classification head: -```python -roberta.register_classification_head('new_task', num_classes=3) -logprobs = roberta.predict('new_task', tokens) # tensor([[-1.1050, -1.0672, -1.1245]], grad_fn=) -``` - -##### Batched prediction: -```python -import torch -from fairseq.data.data_utils import collate_tokens - -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') -roberta.eval() - -batch_of_pairs = [ - ['Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.'], - ['Roberta is a heavily optimized version of BERT.', 'Roberta is based on BERT.'], - ['potatoes are awesome.', 'I like to run.'], - ['Mars is very far from earth.', 'Mars is very close.'], -] - -batch = collate_tokens( - [roberta.encode(pair[0], pair[1]) for pair in batch_of_pairs], pad_idx=1 -) - -logprobs = roberta.predict('mnli', batch) -print(logprobs.argmax(dim=1)) -# tensor([0, 2, 1, 0]) -``` - -##### Using the GPU: -```python -roberta.cuda() -roberta.predict('new_task', tokens) # tensor([[-1.1050, -1.0672, -1.1245]], device='cuda:0', grad_fn=) -``` - -## Advanced usage - -#### Filling masks: - -RoBERTa can be used to fill `` tokens in the input. 
Some examples from the -[Natural Questions dataset](https://ai.google.com/research/NaturalQuestions/): -```python -roberta.fill_mask('The first Star wars movie came out in ', topk=3) -# [('The first Star wars movie came out in 1977', 0.9504708051681519, ' 1977'), ('The first Star wars movie came out in 1978', 0.009986862540245056, ' 1978'), ('The first Star wars movie came out in 1979', 0.009574787691235542, ' 1979')] - -roberta.fill_mask('Vikram samvat calender is official in ', topk=3) -# [('Vikram samvat calender is official in India', 0.21878819167613983, ' India'), ('Vikram samvat calender is official in Delhi', 0.08547237515449524, ' Delhi'), ('Vikram samvat calender is official in Gujarat', 0.07556215673685074, ' Gujarat')] - -roberta.fill_mask(' is the common currency of the European Union', topk=3) -# [('Euro is the common currency of the European Union', 0.9456493854522705, 'Euro'), ('euro is the common currency of the European Union', 0.025748178362846375, 'euro'), ('€ is the common currency of the European Union', 0.011183084920048714, '€')] -``` - -#### Pronoun disambiguation (Winograd Schema Challenge): - -RoBERTa can be used to disambiguate pronouns. First install spaCy and download the English-language model: -```bash -pip install spacy -python -m spacy download en_core_web_lg -``` - -Next load the `roberta.large.wsc` model and call the `disambiguate_pronoun` -function. The pronoun should be surrounded by square brackets (`[]`) and the -query referent surrounded by underscores (`_`), or left blank to return the -predicted candidate text directly: -```python -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.wsc', user_dir='examples/roberta/wsc') -roberta.cuda() # use the GPU (optional) - -roberta.disambiguate_pronoun('The _trophy_ would not fit in the brown suitcase because [it] was too big.') -# True -roberta.disambiguate_pronoun('The trophy would not fit in the brown _suitcase_ because [it] was too big.') -# False - -roberta.disambiguate_pronoun('The city councilmen refused the demonstrators a permit because [they] feared violence.') -# 'The city councilmen' -roberta.disambiguate_pronoun('The city councilmen refused the demonstrators a permit because [they] advocated violence.') -# 'demonstrators' -``` - -See the [RoBERTA Winograd Schema Challenge (WSC) README](wsc/README.md) for more details on how to train this model. - -#### Extract features aligned to words: - -By default RoBERTa outputs one feature vector per BPE token. You can instead -realign the features to match [spaCy's word-level tokenization](https://spacy.io/usage/linguistic-features#tokenization) -with the `extract_features_aligned_to_words` method. This will compute a -weighted average of the BPE-level features for each word and expose them in -spaCy's `Token.vector` attribute: -```python -doc = roberta.extract_features_aligned_to_words('I said, "hello RoBERTa."') -assert len(doc) == 10 -for tok in doc: - print('{:10}{} (...)'.format(str(tok), tok.vector[:5])) -# tensor([-0.1316, -0.0386, -0.0832, -0.0477, 0.1943], grad_fn=) (...) -# I tensor([ 0.0559, 0.1541, -0.4832, 0.0880, 0.0120], grad_fn=) (...) -# said tensor([-0.1565, -0.0069, -0.8915, 0.0501, -0.0647], grad_fn=) (...) -# , tensor([-0.1318, -0.0387, -0.0834, -0.0477, 0.1944], grad_fn=) (...) -# " tensor([-0.0486, 0.1818, -0.3946, -0.0553, 0.0981], grad_fn=) (...) -# hello tensor([ 0.0079, 0.1799, -0.6204, -0.0777, -0.0923], grad_fn=) (...) -# RoBERTa tensor([-0.2339, -0.1184, -0.7343, -0.0492, 0.5829], grad_fn=) (...) -# . 
tensor([-0.1341, -0.1203, -0.1012, -0.0621, 0.1892], grad_fn=) (...) -# " tensor([-0.1341, -0.1203, -0.1012, -0.0621, 0.1892], grad_fn=) (...) -# tensor([-0.0930, -0.0392, -0.0821, 0.0158, 0.0649], grad_fn=) (...) -``` - -#### Evaluating the `roberta.large.mnli` model: - -Example python code snippet to evaluate accuracy on the MNLI `dev_matched` set. -```python -label_map = {0: 'contradiction', 1: 'neutral', 2: 'entailment'} -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('glue_data/MNLI/dev_matched.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[8], tokens[9], tokens[-1] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('mnli', tokens).argmax().item() - prediction_label = label_map[prediction] - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# Expected output: 0.9060 -``` - -## Finetuning - -- [Finetuning on GLUE](README.glue.md) -- [Finetuning on custom classification tasks (e.g., IMDB)](README.custom_classification.md) -- [Finetuning on Winograd Schema Challenge (WSC)](wsc/README.md) -- [Finetuning on Commonsense QA (CQA)](commonsense_qa/README.md) - -## Pretraining using your own data - -See the [tutorial for pretraining RoBERTa using your own data](README.pretraining.md). - -## Citation - -```bibtex -@article{liu2019roberta, - title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach}, - author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and - Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and - Luke Zettlemoyer and Veselin Stoyanov}, - journal={arXiv preprint arXiv:1907.11692}, - year = {2019}, -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/text_to_speech/fastspeech2.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/text_to_speech/fastspeech2.py deleted file mode 100644 index 9c38d0917d997ed5e255ec7a5ed8882b405baffa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/text_to_speech/fastspeech2.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging - -import torch -from torch import nn - -from fairseq.models import (FairseqEncoder, FairseqEncoderModel, register_model, - register_model_architecture) -from fairseq.modules import ( - LayerNorm, PositionalEmbedding, FairseqDropout, MultiheadAttention -) -from fairseq import utils -from fairseq.data.data_utils import lengths_to_padding_mask - - -logger = logging.getLogger(__name__) - - -def model_init(m): - if isinstance(m, nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -def Embedding(num_embeddings, embedding_dim, padding_idx=None): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - return m - - -class PositionwiseFeedForward(nn.Module): - def __init__(self, in_dim, hidden_dim, kernel_size, dropout): - super().__init__() - self.ffn = nn.Sequential( - nn.Conv1d(in_dim, hidden_dim, kernel_size=kernel_size, - padding=(kernel_size - 1) // 2), - nn.ReLU(), - nn.Conv1d(hidden_dim, in_dim, kernel_size=kernel_size, - padding=(kernel_size - 1) // 2) - ) - self.layer_norm = LayerNorm(in_dim) - self.dropout = self.dropout_module = FairseqDropout( - p=dropout, module_name=self.__class__.__name__ - ) - - def forward(self, x): - # B x T x C - residual = x - x = self.ffn(x.transpose(1, 2)).transpose(1, 2) - x = self.dropout(x) - return self.layer_norm(x + residual) - - -class FFTLayer(torch.nn.Module): - def __init__( - self, embed_dim, n_heads, hidden_dim, kernel_size, dropout, - attention_dropout - ): - super().__init__() - self.self_attn = MultiheadAttention( - embed_dim, n_heads, dropout=attention_dropout, self_attention=True - ) - self.layer_norm = LayerNorm(embed_dim) - self.ffn = PositionwiseFeedForward( - embed_dim, hidden_dim, kernel_size, dropout=dropout - ) - - def forward(self, x, padding_mask=None): - # B x T x C - residual = x - x = x.transpose(0, 1) - x, _ = self.self_attn( - query=x, key=x, value=x, key_padding_mask=padding_mask, - need_weights=False - ) - x = x.transpose(0, 1) - x = self.layer_norm(x + residual) - return self.ffn(x) - - -class LengthRegulator(nn.Module): - def forward(self, x, durations): - # x: B x T x C - out_lens = durations.sum(dim=1) - max_len = out_lens.max() - bsz, seq_len, dim = x.size() - out = x.new_zeros((bsz, max_len, dim)) - - for b in range(bsz): - indices = [] - for t in range(seq_len): - indices.extend([t] * utils.item(durations[b, t])) - indices = torch.tensor(indices, dtype=torch.long).to(x.device) - out_len = utils.item(out_lens[b]) - out[b, :out_len] = x[b].index_select(0, indices) - - return out, out_lens - - -class VariancePredictor(nn.Module): - def __init__(self, args): - super().__init__() - self.conv1 = nn.Sequential( - nn.Conv1d( - args.encoder_embed_dim, args.var_pred_hidden_dim, - kernel_size=args.var_pred_kernel_size, - padding=(args.var_pred_kernel_size - 1) // 2 - ), - nn.ReLU() - ) - self.ln1 = nn.LayerNorm(args.var_pred_hidden_dim) - self.dropout_module = FairseqDropout( - p=args.var_pred_dropout, module_name=self.__class__.__name__ - ) - self.conv2 = nn.Sequential( - nn.Conv1d( - args.var_pred_hidden_dim, args.var_pred_hidden_dim, - kernel_size=args.var_pred_kernel_size, padding=1 - ), - nn.ReLU() - ) - self.ln2 = nn.LayerNorm(args.var_pred_hidden_dim) - self.proj = nn.Linear(args.var_pred_hidden_dim, 1) - - def forward(self, x): - # Input: B x T x C; Output: B x T - x = self.conv1(x.transpose(1, 2)).transpose(1, 2) - x = self.dropout_module(self.ln1(x)) - x = self.conv2(x.transpose(1, 
2)).transpose(1, 2) - x = self.dropout_module(self.ln2(x)) - return self.proj(x).squeeze(dim=2) - - -class VarianceAdaptor(nn.Module): - def __init__(self, args): - super().__init__() - self.args = args - self.length_regulator = LengthRegulator() - self.duration_predictor = VariancePredictor(args) - self.pitch_predictor = VariancePredictor(args) - self.energy_predictor = VariancePredictor(args) - - n_bins, steps = self.args.var_pred_n_bins, self.args.var_pred_n_bins - 1 - self.pitch_bins = torch.linspace(args.pitch_min, args.pitch_max, steps) - self.embed_pitch = Embedding(n_bins, args.encoder_embed_dim) - self.energy_bins = torch.linspace(args.energy_min, args.energy_max, steps) - self.embed_energy = Embedding(n_bins, args.encoder_embed_dim) - - def get_pitch_emb(self, x, tgt=None, factor=1.0): - out = self.pitch_predictor(x) - bins = self.pitch_bins.to(x.device) - if tgt is None: - out = out * factor - emb = self.embed_pitch(torch.bucketize(out, bins)) - else: - emb = self.embed_pitch(torch.bucketize(tgt, bins)) - return out, emb - - def get_energy_emb(self, x, tgt=None, factor=1.0): - out = self.energy_predictor(x) - bins = self.energy_bins.to(x.device) - if tgt is None: - out = out * factor - emb = self.embed_energy(torch.bucketize(out, bins)) - else: - emb = self.embed_energy(torch.bucketize(tgt, bins)) - return out, emb - - def forward( - self, x, padding_mask, durations=None, pitches=None, energies=None, - d_factor=1.0, p_factor=1.0, e_factor=1.0 - ): - # x: B x T x C - log_dur_out = self.duration_predictor(x) - dur_out = torch.clamp( - torch.round((torch.exp(log_dur_out) - 1) * d_factor).long(), min=0 - ) - dur_out.masked_fill_(padding_mask, 0) - - pitch_out, pitch_emb = self.get_pitch_emb(x, pitches, p_factor) - x = x + pitch_emb - energy_out, energy_emb = self.get_energy_emb(x, energies, e_factor) - x = x + energy_emb - - x, out_lens = self.length_regulator( - x, dur_out if durations is None else durations - ) - - return x, out_lens, log_dur_out, pitch_out, energy_out - - -class FastSpeech2Encoder(FairseqEncoder): - def __init__(self, args, src_dict, embed_speaker): - super().__init__(src_dict) - self.args = args - self.padding_idx = src_dict.pad() - self.n_frames_per_step = args.n_frames_per_step - self.out_dim = args.output_frame_dim * args.n_frames_per_step - - self.embed_speaker = embed_speaker - self.spk_emb_proj = None - if embed_speaker is not None: - self.spk_emb_proj = nn.Linear( - args.encoder_embed_dim + args.speaker_embed_dim, - args.encoder_embed_dim - ) - - self.dropout_module = FairseqDropout( - p=args.dropout, module_name=self.__class__.__name__ - ) - self.embed_tokens = Embedding( - len(src_dict), args.encoder_embed_dim, padding_idx=self.padding_idx - ) - - self.embed_positions = PositionalEmbedding( - args.max_source_positions, args.encoder_embed_dim, self.padding_idx - ) - self.pos_emb_alpha = nn.Parameter(torch.ones(1)) - self.dec_pos_emb_alpha = nn.Parameter(torch.ones(1)) - - self.encoder_fft_layers = nn.ModuleList( - FFTLayer( - args.encoder_embed_dim, args.encoder_attention_heads, - args.fft_hidden_dim, args.fft_kernel_size, - dropout=args.dropout, attention_dropout=args.attention_dropout - ) - for _ in range(args.encoder_layers) - ) - - self.var_adaptor = VarianceAdaptor(args) - - self.decoder_fft_layers = nn.ModuleList( - FFTLayer( - args.decoder_embed_dim, args.decoder_attention_heads, - args.fft_hidden_dim, args.fft_kernel_size, - dropout=args.dropout, attention_dropout=args.attention_dropout - ) - for _ in range(args.decoder_layers) - ) - - 
self.out_proj = nn.Linear(args.decoder_embed_dim, self.out_dim) - - self.apply(model_init) - - def forward(self, src_tokens, src_lengths=None, speaker=None, - durations=None, pitches=None, energies=None, **kwargs): - x = self.embed_tokens(src_tokens) - - enc_padding_mask = src_tokens.eq(self.padding_idx) - x += self.pos_emb_alpha * self.embed_positions(enc_padding_mask) - x = self.dropout_module(x) - - for layer in self.encoder_fft_layers: - x = layer(x, enc_padding_mask) - - if self.embed_speaker is not None: - bsz, seq_len, _ = x.size() - emb = self.embed_speaker(speaker).expand(bsz, seq_len, -1) - x = self.spk_emb_proj(torch.cat([x, emb], dim=2)) - - x, out_lens, log_dur_out, pitch_out, energy_out = \ - self.var_adaptor(x, enc_padding_mask, durations, pitches, energies) - - dec_padding_mask = lengths_to_padding_mask(out_lens) - x += self.dec_pos_emb_alpha * self.embed_positions(dec_padding_mask) - for layer in self.decoder_fft_layers: - x = layer(x, dec_padding_mask) - - x = self.out_proj(x) - - return x, out_lens, log_dur_out, pitch_out, energy_out - - -@register_model("fastspeech2") -class FastSpeech2Model(FairseqEncoderModel): - """ - Implementation for https://arxiv.org/abs/2006.04558 - """ - - NON_AUTOREGRESSIVE = True - - @staticmethod - def add_args(parser): - parser.add_argument("--dropout", type=float) - parser.add_argument("--output-frame-dim", type=int) - parser.add_argument("--speaker-embed-dim", type=int) - # FFT blocks - parser.add_argument("--fft-hidden-dim", type=int) - parser.add_argument("--fft-kernel-size", type=int) - parser.add_argument("--attention-dropout", type=float) - parser.add_argument("--encoder-layers", type=int) - parser.add_argument("--encoder-embed-dim", type=int) - parser.add_argument("--encoder-attention-heads", type=int) - parser.add_argument("--decoder-layers", type=int) - parser.add_argument("--decoder-embed-dim", type=int) - parser.add_argument("--decoder-attention-heads", type=int) - # variance predictor - parser.add_argument("--var-pred-n-bins", type=int) - parser.add_argument("--var-pred-hidden-dim", type=int) - parser.add_argument("--var-pred-kernel-size", type=int) - parser.add_argument("--var-pred-dropout", type=float) - - def __init__(self, encoder, args, src_dict): - super().__init__(encoder) - self._num_updates = 0 - - out_dim = args.output_frame_dim * args.n_frames_per_step - self.ctc_proj = None - if getattr(args, "ctc_weight", 0.) 
> 0.: - self.ctc_proj = nn.Linear(out_dim, len(src_dict)) - - @classmethod - def build_model(cls, args, task): - embed_speaker = task.get_speaker_embeddings(args) - encoder = FastSpeech2Encoder(args, task.src_dict, embed_speaker) - return cls(encoder, args, task.src_dict) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self._num_updates = num_updates - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = self.ctc_proj(net_output[0]) - if log_probs: - return utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - -@register_model_architecture("fastspeech2", "fastspeech2") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.2) - args.output_frame_dim = getattr(args, "output_frame_dim", 80) - args.speaker_embed_dim = getattr(args, "speaker_embed_dim", 64) - # FFT blocks - args.fft_hidden_dim = getattr(args, "fft_hidden_dim", 1024) - args.fft_kernel_size = getattr(args, "fft_kernel_size", 9) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.encoder_layers = getattr(args, "encoder_layers", 4) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 2) - args.decoder_layers = getattr(args, "decoder_layers", 4) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 2) - # variance predictor - args.var_pred_n_bins = getattr(args, "var_pred_n_bins", 256) - args.var_pred_hidden_dim = getattr(args, "var_pred_hidden_dim", 256) - args.var_pred_kernel_size = getattr(args, "var_pred_kernel_size", 3) - args.var_pred_dropout = getattr(args, "var_pred_dropout", 0.5) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/dynamic_loss_scaler.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/dynamic_loss_scaler.py deleted file mode 100644 index 43f9be37b9067c520cd794b9a941c57adae25e97..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/dynamic_loss_scaler.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -class DynamicLossScaler(object): - def __init__( - self, - init_scale=2.0 ** 15, - scale_factor=2.0, - scale_window=2000, - tolerance=0.0, - threshold=None, - min_loss_scale=1e-4, - ): - self.loss_scale = init_scale - self.scale_factor = scale_factor - self.scale_window = scale_window - self.tolerance = tolerance - self.threshold = threshold - self._iter = 0 - self._last_overflow_iter = -1 - self._last_rescale_iter = -1 - self._overflows_since_rescale = 0 - self.min_loss_scale = min_loss_scale - - def scale(self, outputs): - return self.loss_scale * outputs - - def update(self): - if (self._iter - self._last_overflow_iter) % self.scale_window == 0: - self.loss_scale *= self.scale_factor - self._last_rescale_iter = self._iter - self._iter += 1 - - def _decrease_loss_scale(self): - self.loss_scale /= self.scale_factor - if self.threshold is not None: - self.loss_scale = max(self.loss_scale, self.threshold) - - def check_overflow(self, grad_norm): - # detect inf and nan - if grad_norm == float("inf") or grad_norm != grad_norm: - # overflow has occured - prev_scale = self.loss_scale - iter_since_rescale = self._iter - self._last_rescale_iter - - self._last_overflow_iter = self._iter - self._overflows_since_rescale += 1 - pct_overflow = self._overflows_since_rescale / float(iter_since_rescale) - if pct_overflow >= self.tolerance: - self._decrease_loss_scale() - self._last_rescale_iter = self._iter - self._overflows_since_rescale = 0 - - if self.loss_scale <= self.min_loss_scale: - # Use FloatingPointError as an uncommon error that parent - # functions can safely catch to stop training. - self.loss_scale = prev_scale - raise FloatingPointError( - ( - "Minimum loss scale reached ({}). Your loss is probably exploding. " - "Try lowering the learning rate, using gradient clipping or " - "increasing the batch size." - ).format(self.min_loss_scale) - ) - - self._iter += 1 - raise OverflowError("setting loss scale to: " + str(self.loss_scale)) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/split_train_valid_docs.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/split_train_valid_docs.py deleted file mode 100644 index ff159785284a13b44626b207d84430c592acaf8f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/split_train_valid_docs.py +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Split a large file into a train and valid set while respecting document -boundaries. Documents should be separated by a single empty line. 
-""" - -import argparse -import random -import sys - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("input") - parser.add_argument("sample_output", help="train output file") - parser.add_argument("remainder_output", help="valid output file") - parser.add_argument("-k", type=int, help="remainder size") - parser.add_argument( - "--lines", action="store_true", help="split lines instead of docs" - ) - args = parser.parse_args() - - assert args.k is not None - - sample = [] - remainder = [] - num_docs = [0] - - def update_sample(doc): - if len(sample) < args.k: - sample.append(doc.copy()) - else: - i = num_docs[0] - j = random.randrange(i + 1) - if j < args.k: - remainder.append(sample[j]) - sample[j] = doc.copy() - else: - remainder.append(doc.copy()) - num_docs[0] += 1 - doc.clear() - - with open(args.input, "r", encoding="utf-8") as h: - doc = [] - for i, line in enumerate(h): - if line.strip() == "": # empty line indicates new document - update_sample(doc) - else: - doc.append(line) - if args.lines: - update_sample(doc) - if i % 1000000 == 0: - print(i, file=sys.stderr, end="", flush=True) - elif i % 100000 == 0: - print(".", file=sys.stderr, end="", flush=True) - if len(doc) > 0: - update_sample(doc) - print(file=sys.stderr, flush=True) - - assert len(sample) == args.k - - with open(args.sample_output, "w", encoding="utf-8") as out: - first = True - for doc in sample: - if not first and not args.lines: - out.write("\n") - first = False - for line in doc: - out.write(line) - - with open(args.remainder_output, "w", encoding="utf-8") as out: - first = True - for doc in remainder: - if not first and not args.lines: - out.write("\n") - first = False - for line in doc: - out.write(line) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/models/ofa/resnet.py b/spaces/OFA-Sys/OFA-Image_Caption/models/ofa/resnet.py deleted file mode 100644 index 9ad8ee87de4bb579d745ab8302a368ca1749a1fe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/models/ofa/resnet.py +++ /dev/null @@ -1,225 +0,0 @@ -import torch -import torch.nn as nn - - -def drop_path(x, drop_prob: float = 0., training: bool = False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, - the original name is misleading as 'Drop Connect' is a.sh different form of dropout in a.sh separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for - changing the layer and argument names to 'drop path' rather than mix DropConnect as a.sh layer name and use - 'survival rate' as the argument. - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
- """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None): - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - assert False - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. - # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. 
- - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None, drop_path_rate=0.0): - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out = identity + self.drop_path(out) - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, layers, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, - norm_layer=None, drop_path_rate=0.0): - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(Bottleneck, 64, layers[0], drop_path_rate=drop_path_rate) - self.layer2 = self._make_layer(Bottleneck, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0], drop_path_rate=drop_path_rate) - self.layer3 = self._make_layer(Bottleneck, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1], drop_path_rate=drop_path_rate) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.SyncBatchNorm, nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False, drop_path_rate=0.0): - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, blocks)] - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer, drop_path_rate=dpr[i])) - - return nn.Sequential(*layers) - - def _forward_impl(self, x): - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - return x - - def forward(self, x): - return self._forward_impl(x) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/backtranslation/prepare-wmt18en2de.sh b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/backtranslation/prepare-wmt18en2de.sh deleted file mode 100644 index f6fd275307db50ca84c299440ae02dce49064030..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/backtranslation/prepare-wmt18en2de.sh +++ /dev/null @@ -1,135 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' -git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=32000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz" - "http://data.statmt.org/wmt18/translation-task/rapid2016.tgz" - "http://data.statmt.org/wmt17/translation-task/dev.tgz" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-nc-v13.tgz" - "rapid2016.tgz" - "dev.tgz" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.de-en" - "commoncrawl.de-en" - "training-parallel-nc-v13/news-commentary-v13.de-en" - "rapid2016.de-en" -) - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." 
- exit 1 -fi - -OUTDIR=wmt18_en_de - -src=en -tgt=de -lang=en-de -prep=$OUTDIR -tmp=$prep/tmp -orig=orig - -mkdir -p $orig $tmp $prep - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit 1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $NORM_PUNC $l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." -for l in $src $tgt; do - if [ "$l" == "$src" ]; then - t="src" - else - t="ref" - fi - grep '\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l - echo "" -done - -echo "splitting train and valid..." -for l in $src $tgt; do - awk '{if (NR%100 == 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l - awk '{if (NR%100 != 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l -done - -TRAIN=$tmp/train.de-en -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." - python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f - done -done - -perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250 -perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250 - -for L in $src $tgt; do - cp $tmp/bpe.test.$L $prep/test.$L -done diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/vads.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/vads.py deleted file mode 100644 index 2398da97d8c44b8f3f270b22d5508a003482b4d6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/vads.py +++ /dev/null @@ -1,98 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import sys - -from copy import deepcopy -from scipy.signal import lfilter - -import numpy as np -from tqdm import tqdm -import soundfile as sf -import os.path as osp - - -def get_parser(): - parser = argparse.ArgumentParser(description="compute vad segments") - parser.add_argument( - "--rvad-home", - "-r", - help="path to rvad home (see https://github.com/zhenghuatan/rVADfast)", - required=True, - ) - - return parser - - -def rvad(speechproc, path): - winlen, ovrlen, pre_coef, nfilter, nftt = 0.025, 0.01, 0.97, 20, 512 - ftThres = 0.5 - vadThres = 0.4 - opts = 1 - - data, fs = sf.read(path) - assert fs == 16_000, "sample rate must be 16khz" - ft, flen, fsh10, nfr10 = speechproc.sflux(data, fs, winlen, ovrlen, nftt) - - # --spectral flatness -- - pv01 = np.zeros(ft.shape[0]) - pv01[np.less_equal(ft, ftThres)] = 1 - pitch = deepcopy(ft) - - pvblk = speechproc.pitchblockdetect(pv01, pitch, nfr10, opts) - - # --filtering-- - ENERGYFLOOR = np.exp(-50) - b = np.array([0.9770, -0.9770]) - a = np.array([1.0000, -0.9540]) - fdata = lfilter(b, a, data, axis=0) - - # --pass 1-- - noise_samp, noise_seg, n_noise_samp = speechproc.snre_highenergy( - fdata, nfr10, flen, fsh10, ENERGYFLOOR, pv01, pvblk - ) - - # sets noisy segments to zero - for j in range(n_noise_samp): - fdata[range(int(noise_samp[j, 0]), int(noise_samp[j, 1]) + 1)] = 0 - - vad_seg = speechproc.snre_vad( - fdata, nfr10, flen, fsh10, ENERGYFLOOR, pv01, pvblk, vadThres - ) - return vad_seg, data - - -def main(): - parser = get_parser() - args = parser.parse_args() - - sys.path.append(args.rvad_home) - import speechproc - - stride = 160 - lines = sys.stdin.readlines() - root = lines[0].rstrip() - for fpath in tqdm(lines[1:]): - path = osp.join(root, fpath.split()[0]) - vads, wav = rvad(speechproc, path) - - start = None - vad_segs = [] - for i, v in enumerate(vads): - if start is None and v == 1: - start = i * stride - elif start is not None and v == 0: - vad_segs.append((start, i * stride)) - start = None - if start is not None: - vad_segs.append((start, len(wav))) - - print(" ".join(f"{v[0]}:{v[1]}" for v in vad_segs)) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/initialize.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/initialize.py deleted file mode 100644 index 8f6cbafb805b293611e2175721132078123b81d0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/initialize.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-"""isort:skip_file""" - -import logging -from hydra.core.config_store import ConfigStore -from fairseq.dataclass.configs import FairseqConfig -from omegaconf import DictConfig, OmegaConf - - -logger = logging.getLogger(__name__) - - -def hydra_init(cfg_name="config") -> None: - - cs = ConfigStore.instance() - cs.store(name=f"{cfg_name}", node=FairseqConfig) - - for k in FairseqConfig.__dataclass_fields__: - v = FairseqConfig.__dataclass_fields__[k].default - try: - cs.store(name=k, node=v) - except BaseException: - logger.error(f"{k} - {v}") - raise - - -def add_defaults(cfg: DictConfig) -> None: - """This function adds default values that are stored in dataclasses that hydra doesn't know about """ - - from fairseq.registry import REGISTRIES - from fairseq.tasks import TASK_DATACLASS_REGISTRY - from fairseq.models import ARCH_MODEL_NAME_REGISTRY, MODEL_DATACLASS_REGISTRY - from fairseq.dataclass.utils import merge_with_parent - from typing import Any - - OmegaConf.set_struct(cfg, False) - - for k, v in FairseqConfig.__dataclass_fields__.items(): - field_cfg = cfg.get(k) - if field_cfg is not None and v.type == Any: - dc = None - - if isinstance(field_cfg, str): - field_cfg = DictConfig({"_name": field_cfg}) - field_cfg.__dict__["_parent"] = field_cfg.__dict__["_parent"] - - name = getattr(field_cfg, "_name", None) - - if k == "task": - dc = TASK_DATACLASS_REGISTRY.get(name) - elif k == "model": - name = ARCH_MODEL_NAME_REGISTRY.get(name, name) - dc = MODEL_DATACLASS_REGISTRY.get(name) - elif k in REGISTRIES: - dc = REGISTRIES[k]["dataclass_registry"].get(name) - - if dc is not None: - cfg[k] = merge_with_parent(dc, field_cfg) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/__init__.py deleted file mode 100644 index 9a46b012c573a76e00e489307720fc3fa462c296..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/__init__.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import argparse -import importlib -import os - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent -from hydra.core.config_store import ConfigStore - -from .fairseq_task import FairseqTask, LegacyFairseqTask # noqa - - -# register dataclass -TASK_DATACLASS_REGISTRY = {} -TASK_REGISTRY = {} -TASK_CLASS_NAMES = set() - - -def setup_task(cfg: FairseqDataclass, **kwargs): - task = None - task_name = getattr(cfg, "task", None) - - if isinstance(task_name, str): - # legacy tasks - task = TASK_REGISTRY[task_name] - if task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = dc.from_namespace(cfg) - else: - task_name = getattr(cfg, "_name", None) - - if task_name and task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = merge_with_parent(dc(), cfg) - task = TASK_REGISTRY[task_name] - - assert ( - task is not None - ), f"Could not infer task type from {cfg}. Available argparse tasks: {TASK_REGISTRY.keys()}. Available hydra tasks: {TASK_DATACLASS_REGISTRY.keys()}" - - return task.setup_task(cfg, **kwargs) - - -def register_task(name, dataclass=None): - """ - New tasks can be added to fairseq with the - :func:`~fairseq.tasks.register_task` function decorator. 
- - For example:: - - @register_task('classification') - class ClassificationTask(FairseqTask): - (...) - - .. note:: - - All Tasks must implement the :class:`~fairseq.tasks.FairseqTask` - interface. - - Args: - name (str): the name of the task - """ - - def register_task_cls(cls): - if name in TASK_REGISTRY: - raise ValueError("Cannot register duplicate task ({})".format(name)) - if not issubclass(cls, FairseqTask): - raise ValueError( - "Task ({}: {}) must extend FairseqTask".format(name, cls.__name__) - ) - if cls.__name__ in TASK_CLASS_NAMES: - raise ValueError( - "Cannot register task with duplicate class name ({})".format( - cls.__name__ - ) - ) - TASK_REGISTRY[name] = cls - TASK_CLASS_NAMES.add(cls.__name__) - - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if dataclass is not None: - TASK_DATACLASS_REGISTRY[name] = dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group="task", node=node, provider="fairseq") - - return cls - - return register_task_cls - - -def get_task(name): - return TASK_REGISTRY[name] - - -def import_tasks(tasks_dir, namespace): - for file in os.listdir(tasks_dir): - path = os.path.join(tasks_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - task_name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module(namespace + "." + task_name) - - # expose `task_parser` for sphinx - if task_name in TASK_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_task = parser.add_argument_group("Task name") - # fmt: off - group_task.add_argument('--task', metavar=task_name, - help='Enable this task with: ``--task=' + task_name + '``') - # fmt: on - group_args = parser.add_argument_group( - "Additional command-line arguments" - ) - TASK_REGISTRY[task_name].add_args(group_args) - globals()[task_name + "_parser"] = parser - - -# automatically import any Python files in the tasks/ directory -tasks_dir = os.path.dirname(__file__) -import_tasks(tasks_dir, "fairseq.tasks") diff --git a/spaces/OIUGLK/bingo/src/lib/isomorphic/browser.ts b/spaces/OIUGLK/bingo/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/Omnibus/2-button-Story-Board/app.py b/spaces/Omnibus/2-button-Story-Board/app.py deleted file mode 100644 index 66bb6e1fc9d8db9dfd8ae61d4e2353b33255aa72..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/2-button-Story-Board/app.py +++ /dev/null @@ -1,488 +0,0 @@ -import gradio as gr -import torch -import os -import sys -from pathlib import Path -from transformers import pipeline -#from diffusers import StableDiffusionPipeline - -import io, base64 -from PIL import Image -import numpy as np -#import mediapy -#from huggingface_hub import snapshot_download - - -models = [ - "prompthero/midjourney-v4-diffusion", - "runwayml/stable-diffusion-v1-5", - "johnslegers/epic-diffusion-v1.1", - 
"claudfuen/photorealistic-fuen-v1", - "naclbit/trinart_stable_diffusion_v2", - "nitrosocke/Arcane-Diffusion", - "nitrosocke/archer-diffusion", - "nitrosocke/elden-ring-diffusion", - "nitrosocke/redshift-diffusion", - "nitrosocke/spider-verse-diffusion", - "nitrosocke/mo-di-diffusion", - "nitrosocke/classic-anim-diffusion", - "dreamlike-art/dreamlike-diffusion-1.0", - "dreamlike-art/dreamlike-photoreal-2.0", - "wavymulder/wavyfusion", - "wavymulder/Analog-Diffusion", - "prompthero/openjourney", - "dallinmackay/Van-Gogh-diffusion", - "hakurei/waifu-diffusion", - "DGSpitzer/Cyberpunk-Anime-Diffusion", - "Fictiverse/Stable_Diffusion_BalloonArt_Model", - "dallinmackay/Tron-Legacy-diffusion", - "AstraliteHeart/pony-diffusion", - "nousr/robo-diffusion", - "CompVis/stable-diffusion-v1-4", - -] -current_model = models[0] - - - -text_gen1=gr.Interface.load("spaces/Omnibus/MagicPrompt-Stable-Diffusion_link",live=True) -models2=[ - gr.Interface.load(f"models/{models[0]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[1]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[2]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[3]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[4]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[5]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[6]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[7]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[8]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[9]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[10]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[11]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[12]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[13]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[14]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[15]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[16]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[17]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[18]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[19]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[20]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[21]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[22]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[23]}",live=True,preprocess=True), - gr.Interface.load(f"models/{models[24]}",live=True,preprocess=True), - - -] - - -story_gen1 = pipeline("text-generation", "pranavpsv/gpt2-genre-story-generator") - -max_new_tokens=139 - -def on_word_count(word_count): - word_count=max_new_tokens - return max_new_tokens - -def style_add_a(input, persist=None,style=None): - output=(f"{persist} {input}, {style}") - return(output) - -def generate_story(choice, input_text): - query = " <{0}> {1}".format(choice, input_text) - #global generated_text - generated_text = story_gen1(query, max_new_tokens=max_new_tokens) - generated_text = generated_text[0]['generated_text'] - generated_text = generated_text.split('> ')[2] - - return generated_text -def generate_story1(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[0]) > 0: - generated_text1 = generated_text.split('. 
')[0] - else: - generated_text1 = "" - generated_text1_ = style_add_a(generated_text1,persist,style_out) - - return generated_text1_,generated_text1 -def generate_story2(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[1]) > 0: - generated_text2 = generated_text.split('. ')[1] - else: - generated_text2 = "" - generated_text2_ = style_add_a(generated_text2,persist,style_out) - - - return generated_text2_,generated_text2 -def generate_story3(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[2]) > 0: - generated_text3 = generated_text.split('. ')[2] - else: - generated_text3 = "" - generated_text3_ = style_add_a(generated_text3,persist,style_out) - - return generated_text3_,generated_text3 -def generate_story4(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[3]) > 0: - generated_text4 = generated_text.split('. ')[3] - else: - generated_text4 = "" - generated_text4_ = style_add_a(generated_text4,persist,style_out) - - return generated_text4_,generated_text4 -def generate_story5(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[4]) > 0: - generated_text5 = generated_text.split('. ')[4] - else: - generated_text5 = "" - generated_text5_ = style_add_a(generated_text5,persist,style_out) - - return generated_text5_,generated_text5 -def generate_story6(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[5]) > 0: - generated_text6 = generated_text.split('. ')[5] - else: - generated_text6 = "" - generated_text6_ = style_add_a(generated_text6,persist,style_out) - - return generated_text6_,generated_text6 -def generate_story7(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[6]) > 0: - generated_text7 = generated_text.split('. ')[6] - else: - generated_text7 = "" - generated_text7_ = style_add_a(generated_text7,persist,style_out) - - return generated_text7_,generated_text7 -def generate_story8(generated_text, persist=None, style_out=None): - if len(generated_text.split('. ')[7]) > 0: - generated_text8 = generated_text.split('. 
')[7] - else: - generated_text8 = "" - generated_text8_ = style_add_a(generated_text8,persist,style_out) - - return generated_text8_,generated_text8 - -def text_it1(inputs,style,text_gen1=text_gen1): - go_t1=text_gen1(f"{inputs} {style}") - go_t1 = go_t1.split(f"{inputs}")[1] - - return(go_t1) - - -def set_model(current_model): - current_model = models[current_model] - return gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")),gr.update(label=(f"{current_model}")), - - -def send_it1(inputs, model_choice): - proc1=models2[model_choice] - output1=proc1(inputs) - return(output1) -def send_it2(inputs, model_choice): - proc2=models2[model_choice] - output2=proc2(inputs) - return(output2) -def send_it3(inputs, model_choice): - proc3=models2[model_choice] - output3=proc3(inputs) - return(output3) -def send_it4(inputs, model_choice): - proc4=models2[model_choice] - output4=proc4(inputs) - return(output4) -def send_it5(inputs, model_choice): - proc5=models2[model_choice] - output5=proc5(inputs) - return(output5) -def send_it6(inputs, model_choice): - proc6=models2[model_choice] - output6=proc6(inputs) - return(output6) -def send_it7(inputs, model_choice): - proc7=models2[model_choice] - output7=proc7(inputs) - return(output7) -def send_it8(inputs, model_choice): - proc8=models2[model_choice] - output8=proc8(inputs) - return(output8) - - -def short_prompt(inputs): - return(inputs) - -def style_add(input,style): - output=(f"{input}, {style}") - return(output) - - -def topic_persist(input,persist): - output=(f"{persist}, {input}") - return(output) -def example_func(input_text): - input_text="There was a Punk Rock battle. the battle was in outerspace. the robots were winning" - input_story_type='sci_fi' - style_seed="Anime Style" - persist="Spacepunks" - - - go_t1=text_gen1(f"{input_text} {style_seed}") - style_out = go_t1.split(f"{input_text}")[1] - - - generated_text = generate_story(input_story_type,input_text) - #gen_t_len=len(generated_text) - - return(input_text,input_story_type,generated_text,style_seed,persist,style_out) - - - -with gr.Blocks() as myface: - with gr.Row(): - with gr.Tab("2-buttons"): - with gr.Row(): - example_btn=gr.Button("Load Example") - run=gr.Button("Generate Images") - - - - with gr.Tab("Title"): - gr.HTML(""" Cool Story
- Cool Story
- Poor-man's Latent Images
- Generate similar images from story prompts and style
-
- """) - - with gr.Tab("Description"): - gr.Column() - with gr.Column(): - gr.HTML("""
- Enter your Prompt into the "Starting Text" box and click "Generate Story"
- The story will be displayed in the "Generated Story" box and segmented into each box below an image.
- Add a period "." in the main story for further segmenting.
- Enter a style seed, e.g. "street art style", and click "Generate Style"
- When you are satisfied with the prompt in the "Generated Style" box, click "Use Style" to add the style to the text boxes
- You can adjust the Generated Story by typing in the "Generated Story" box
- Not responsible for content, use at your own risk.
-
""") - gr.Column() - - with gr.Tab("Credits"): - with gr.Row(): - gr.Column() - with gr.Column(style="text-align:left;"): - gr.HTML(""" -
- pranavpsv/gpt2-genre-story-generator
- Finetuned Diffusion
- Magic Prompt Stable Diffusion
- Magic Diffusion
- Links to models in the top right of this window!
- - - -
- """) - gr.Column() - - - with gr.Tab("Tools"): - with gr.Tab("View"): - with gr.Row(): - with gr.Column(style="width=50%, height=70%"): - gr.Pil(label="Crop") - with gr.Column(style="width=50%, height=70%"): - gr.Pil(label="Crop") - - - with gr.Tab("Draw"): - with gr.Column(style="width=50%, height=70%"): - gr.Pil(label="Crop") - with gr.Column(style="width=50%, height=70%"): - gr.Pil(label="Draw") - - - gr.ImagePaint(label="Draw") - - with gr.Tab("Text"): - with gr.Row(): - - with gr.Column(scale=50): - gr.Textbox(label="", lines=8, interactive=True) - - - with gr.Column(scale=50): - gr.Textbox(label="", lines=8, interactive=True) - - with gr.Tab("Color Picker"): - with gr.Row(): - - with gr.Column(scale=50): - gr.ColorPicker(label="Color", interactive=True) - - - with gr.Column(scale=50): - gr.ImagePaint(label="Draw", interactive=True) - with gr.Row(): - input_text = gr.Textbox(placeholder='A punk rock battle in', label="Starting Text", lines=1) - style_seed = gr.Textbox(placeholder='street art style', label="Style Seed", lines=1) - persist_topic = gr.Textbox(placeholder='The Cat', label="Persistent Topic", lines=1) - - - #with gr.Row(): - with gr.Accordion("More Buttons", open=False): - with gr.Row(): - button_gen_story = gr.Button("Generate Story") - see_prompts=gr.Button("Generate Style") - persist_btn=gr.Button("Use Topic") - with gr.Row(): - model_name1 = gr.Dropdown(show_label=False, choices=[m for m in models], type="index", value=current_model, interactive=True) - input_story_type = gr.Radio(choices=['superhero', 'action', 'drama', 'horror', 'thriller', 'sci_fi'], value='sci_fi', label="Genre") - with gr.Row(): - with gr.Column(): - output_generated_story = gr.Textbox(label="Generated Story", lines=8) - word_count1=gr.Slider(50, 250, value=139, label="Max Tokens") - with gr.Column(): - output_style = gr.Textbox(label="Generated Style", lines=8) - use_style=gr.Button("Use Style") - button_load_story = gr.Button("Remove Style") - - - with gr.Tab("Main"): - with gr.Row(): - with gr.Column(): - output1=gr.Image(label=(f"{current_model}")) - magic_1=gr.Textbox(lines=4) - magic1=gr.Textbox(lines=4) - with gr.Column(): - output2=gr.Image(label=(f"{current_model}")) - magic_2=gr.Textbox(lines=4) - magic2=gr.Textbox(lines=4) - with gr.Column(): - output3=gr.Image(label=(f"{current_model}")) - magic_3=gr.Textbox(lines=4) - magic3=gr.Textbox(lines=4) - with gr.Column(): - output4=gr.Image(label=(f"{current_model}")) - magic_4=gr.Textbox(lines=4) - magic4=gr.Textbox(lines=4) - with gr.Row(): - with gr.Column(): - output5=gr.Image(label=(f"{current_model}")) - magic_5=gr.Textbox(lines=4) - magic5=gr.Textbox(lines=4) - with gr.Column(): - output6=gr.Image(label=(f"{current_model}")) - magic_6=gr.Textbox(lines=4) - magic6=gr.Textbox(lines=4) - with gr.Column(): - output7=gr.Image(label=(f"{current_model}")) - magic_7=gr.Textbox(lines=4) - magic7=gr.Textbox(lines=4) - with gr.Column(): - output8=gr.Image(label=(f"{current_model}")) - magic_8=gr.Textbox(lines=4) - magic8=gr.Textbox(lines=4) - - def short_prompt(inputs): - return(inputs) - model_name1.change(set_model,inputs=model_name1,outputs=[output1,output2,output3,output4,output5,output6,output7,output8]) - example_btn.click(example_func, inputs=[], outputs=[input_text,input_story_type,output_generated_story,style_seed,persist_topic,output_style]) - #example_btn.click(example_func, inputs=[], 
outputs=[input_text,input_story_type,output_generated_story,style_seed,persist_topic,output_style,magic1,magic2,magic3,magic4,magic5,magic6,magic7,magic8]) - word_count1.change(fn=on_word_count,inputs=word_count1,outputs=[]) - button_gen_story.click(fn=generate_story, inputs=[input_story_type,input_text], outputs=[output_generated_story]) - - persist_btn.click(fn=generate_story1, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic1,magic_1]) - persist_btn.click(fn=generate_story2, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic2,magic_2]) - persist_btn.click(fn=generate_story3, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic3,magic_3]) - persist_btn.click(fn=generate_story4, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic4,magic_4]) - persist_btn.click(fn=generate_story5, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic5,magic_5]) - persist_btn.click(fn=generate_story6, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic6,magic_6]) - persist_btn.click(fn=generate_story7, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic7,magic_7]) - persist_btn.click(fn=generate_story8, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic8,magic_8]) - - #persist_topic.change(fn=generate_story1, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic1]) - #persist_topic.change(fn=generate_story2, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic2]) - #persist_topic.change(fn=generate_story3, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic3]) - #persist_topic.change(fn=generate_story4, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic4]) - #persist_topic.change(fn=generate_story5, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic5]) - #persist_topic.change(fn=generate_story6, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic6]) - #persist_topic.change(fn=generate_story7, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic7]) - #persist_topic.change(fn=generate_story8, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic8]) - - #style_seed.change(fn=generate_story1, inputs=[output_generated_story], outputs=[magic1]) - output_style.change(fn=generate_story1, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic1,magic_1]) - output_style.change(fn=generate_story2, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic2,magic_2]) - output_style.change(fn=generate_story3, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic3,magic_3]) - output_style.change(fn=generate_story4, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic4,magic_4]) - output_style.change(fn=generate_story5, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic5,magic_5]) - output_style.change(fn=generate_story6, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic6,magic_6]) - output_style.change(fn=generate_story7, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic7,magic_7]) - output_style.change(fn=generate_story8, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic8,magic_8]) - - - output_generated_story.change(fn=generate_story1, 
inputs=[output_generated_story,persist_topic, output_style], outputs=[magic1,magic_1]) - output_generated_story.change(fn=generate_story2, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic2,magic_2]) - output_generated_story.change(fn=generate_story3, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic3,magic_3]) - output_generated_story.change(fn=generate_story4, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic4,magic_4]) - output_generated_story.change(fn=generate_story5, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic5,magic_5]) - output_generated_story.change(fn=generate_story6, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic6,magic_6]) - output_generated_story.change(fn=generate_story7, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic7,magic_7]) - output_generated_story.change(fn=generate_story8, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic8,magic_8]) - - button_load_story.click(fn=generate_story1, inputs=[output_generated_story], outputs=[magic1,magic_1]) - button_load_story.click(fn=generate_story2, inputs=[output_generated_story], outputs=[magic2,magic_2]) - button_load_story.click(fn=generate_story3, inputs=[output_generated_story], outputs=[magic3,magic_3]) - button_load_story.click(fn=generate_story4, inputs=[output_generated_story], outputs=[magic4,magic_4]) - button_load_story.click(fn=generate_story5, inputs=[output_generated_story], outputs=[magic5,magic_5]) - button_load_story.click(fn=generate_story6, inputs=[output_generated_story], outputs=[magic6,magic_6]) - button_load_story.click(fn=generate_story7, inputs=[output_generated_story], outputs=[magic7,magic_7]) - button_load_story.click(fn=generate_story8, inputs=[output_generated_story], outputs=[magic8,magic_8]) - - use_style.click(fn=generate_story1, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic1,magic_1]) - use_style.click(fn=generate_story2, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic2,magic_2]) - use_style.click(fn=generate_story3, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic3,magic_3]) - use_style.click(fn=generate_story4, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic4,magic_4]) - use_style.click(fn=generate_story5, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic5,magic_5]) - use_style.click(fn=generate_story6, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic6,magic_6]) - use_style.click(fn=generate_story7, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic7,magic_7]) - use_style.click(fn=generate_story8, inputs=[output_generated_story,persist_topic, output_style], outputs=[magic8,magic_8]) - - see_prompts.click(text_it1,inputs=[input_text,style_seed],outputs=output_style) - - run.click(send_it1, inputs=[magic1,model_name1], outputs=[output1]) - run.click(send_it2, inputs=[magic2,model_name1], outputs=[output2]) - run.click(send_it3, inputs=[magic3,model_name1], outputs=[output3]) - run.click(send_it4, inputs=[magic4,model_name1], outputs=[output4]) - run.click(send_it5, inputs=[magic5,model_name1], outputs=[output5]) - run.click(send_it6, inputs=[magic6,model_name1], outputs=[output6]) - run.click(send_it7, inputs=[magic7,model_name1], outputs=[output7]) - run.click(send_it8, inputs=[magic8,model_name1], outputs=[output8]) -if 
__name__ == "__main__": - myface.queue(concurrency_count=100) - myface.launch(enable_queue=True, inline=False, max_threads=400) \ No newline at end of file diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN/cloc.sh b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN/cloc.sh deleted file mode 100644 index 2dc336fc8aa81350fbe9a03c543927734ff00c2b..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN/cloc.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash - -# This scripts counts the lines of code and comments in all source files -# and prints the results to the command line. It uses the commandline tool -# "cloc". You can either pass --loc, --comments or --percentage to show the -# respective values only. -# Some parts below need to be adapted to your project! - -# Get the location of this script. -SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" - -# Run cloc - this counts code lines, blank lines and comment lines -# for the specified languages. You will need to change this accordingly. -# For C++, you could use "C++,C/C++ Header" for example. -# We are only interested in the summary, therefore the tail -1 -SUMMARY="$(cloc "${SCRIPT_DIR}" --include-lang="Python" --md | tail -1)" - -# The $SUMMARY is one line of a markdown table and looks like this: -# SUM:|101|3123|2238|10783 -# We use the following command to split it into an array. -IFS='|' read -r -a TOKENS <<<"$SUMMARY" - -# Store the individual tokens for better readability. -NUMBER_OF_FILES=${TOKENS[1]} -COMMENT_LINES=${TOKENS[3]} -LINES_OF_CODE=${TOKENS[4]} - -# To make the estimate of commented lines more accurate, we have to -# subtract any copyright header which is included in each file. -# For Fly-Pie, this header has the length of five lines. -# All dumb comments like those /////////// or those // ------------ -# are also subtracted. As cloc does not count inline comments, -# the overall estimate should be rather conservative. -# Change the lines below according to your project. -# DUMB_COMMENTS="$(grep -r -E '//////|// -----' "${SCRIPT_DIR}" | wc -l)" -# COMMENT_LINES=$(($COMMENT_LINES - 5 * $NUMBER_OF_FILES - $DUMB_COMMENTS)) - -# Print all results if no arguments are given. -if [[ $# -eq 0 ]]; then - awk -v a=$LINES_OF_CODE \ - 'BEGIN {printf "Lines of source code: %6.1fk\n", a/1000}' - awk -v a=$COMMENT_LINES \ - 'BEGIN {printf "Lines of comments: %6.1fk\n", a/1000}' - awk -v a=$COMMENT_LINES -v b=$LINES_OF_CODE \ - 'BEGIN {printf "Comment Percentage: %6.1f%\n", 100*a/b}' - exit 0 -fi - -# Show lines of code if --loc is given. -if [[ $* == *--loc* ]]; then - awk -v a=$LINES_OF_CODE \ - 'BEGIN {printf "%.1fk\n", a/1000}' -fi - -# Show lines of comments if --comments is given. -if [[ $* == *--comments* ]]; then - awk -v a=$COMMENT_LINES \ - 'BEGIN {printf "%.1fk\n", a/1000}' -fi - -# Show precentage of comments if --percentage is given. 
-if [[ $* == *--percentage* ]]; then - awk -v a=$COMMENT_LINES -v b=$LINES_OF_CODE \ - 'BEGIN {printf "%.1f\n", 100*a/b}' -fi diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/docker/build.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/docker/build.sh deleted file mode 100644 index d70b747ff1282b9c60cf8b8b352b6b7701e1a254..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/docker/build.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -BASEDIR="$(dirname $0)" - -docker build -t windj007/lama -f "$BASEDIR/Dockerfile" "$BASEDIR" diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/image/io.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/image/io.py deleted file mode 100644 index d3fa2e8cc06b1a7b0b69de6406980b15d61a1e5d..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/image/io.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from annotator.uniformer.mmcv.utils import check_file_exist, is_str, mkdir_or_exist - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. - - Args: - backend (str): The image decoding backend type. Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. - """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. 
- channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. - - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, flag='color', channel_order='bgr', backend=None): - """Read an image. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. - By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. 
Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - check_file_exist(img_or_path, - f'img file does not exist: {img_or_path}') - if backend == 'turbojpeg': - with open(img_or_path, 'rb') as in_file: - img = jpeg.decode(in_file.read(), - _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - img = Image.open(img_or_path) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - img = tifffile.imread(img_or_path) - return img - else: - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imread(img_or_path, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the - global imread_backend specified by ``mmcv.use_backend()`` will be - used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - buff = io.BytesIO(content) - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. - """ - if auto_mkdir: - dir_name = osp.abspath(osp.dirname(file_path)) - mkdir_or_exist(dir_name) - return cv2.imwrite(file_path, img, params) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/dist_utils.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/dist_utils.py deleted file mode 100644 index d3a1ef3fda5ceeb31bf15a73779da1b1903ab0fe..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/dist_utils.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import functools -import os -import subprocess -from collections import OrderedDict - -import torch -import torch.multiprocessing as mp -from torch import distributed as dist -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - - -def init_dist(launcher, backend='nccl', **kwargs): - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'mpi': - _init_dist_mpi(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_mpi(backend, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['OMPI_COMM_WORLD_RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend, port=None): - """Initialize slurm distributed training environment. - - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. - - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. - """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput( - f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - # use MASTER_ADDR in the environment variable if it already exists - if 'MASTER_ADDR' not in os.environ: - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - -def get_dist_info(): - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def master_only(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper - - -def allreduce_params(params, coalesce=True, bucket_size_mb=-1): - """Allreduce parameters. - - Args: - params (list[torch.Parameters]): List of parameters or buffers of a - model. - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. 
- """ - _, world_size = get_dist_info() - if world_size == 1: - return - params = [param.data for param in params] - if coalesce: - _allreduce_coalesced(params, world_size, bucket_size_mb) - else: - for tensor in params: - dist.all_reduce(tensor.div_(world_size)) - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - """Allreduce gradients. - - Args: - params (list[torch.Parameters]): List of parameters of a model - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - _, world_size = get_dist_info() - if world_size == 1: - return - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/seg/sampler/base_pixel_sampler.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/seg/sampler/base_pixel_sampler.py deleted file mode 100644 index b75b1566c9f18169cee51d4b55d75e0357b69c57..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/seg/sampler/base_pixel_sampler.py +++ /dev/null @@ -1,12 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BasePixelSampler(metaclass=ABCMeta): - """Base class of pixel sampler.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def sample(self, seg_logit, seg_label): - """Placeholder for sample function.""" diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/losses/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/losses/__init__.py deleted file mode 100644 index beca72045694273d63465bac2f27dbc6672271db..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/losses/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from .accuracy import Accuracy, accuracy -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .lovasz_loss import LovaszLoss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'reduce_loss', - 'weight_reduce_loss', 'weighted_loss', 'LovaszLoss', 'DiceLoss' -] diff --git a/spaces/Pie31415/control-animation/annotator/canny/__init__.py b/spaces/Pie31415/control-animation/annotator/canny/__init__.py deleted file mode 100644 index cb0da951dc838ec9dec2131007e036113281800b..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/canny/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -import cv2 - - 
-class CannyDetector: - def __call__(self, img, low_threshold, high_threshold): - return cv2.Canny(img, low_threshold, high_threshold) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/exp/upernet_global_small/test_config_h32.py b/spaces/Pie31415/control-animation/annotator/uniformer/exp/upernet_global_small/test_config_h32.py deleted file mode 100644 index a31e3874f76f9f7b089ac8834d85df2441af9b0e..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/exp/upernet_global_small/test_config_h32.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=True, - window_size=32 - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/iou3d.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/iou3d.py deleted file mode 100644 index 6fc71979190323f44c09f8b7e1761cf49cd2d76b..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/iou3d.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'iou3d_boxes_iou_bev_forward', 'iou3d_nms_forward', - 'iou3d_nms_normal_forward' -]) - - -def boxes_iou_bev(boxes_a, boxes_b): - """Calculate boxes IoU in the Bird's Eye View. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 5). - boxes_b (torch.Tensor): Input boxes b with shape (N, 5). - - Returns: - ans_iou (torch.Tensor): IoU result with shape (M, N). - """ - ans_iou = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - - ext_module.iou3d_boxes_iou_bev_forward(boxes_a.contiguous(), - boxes_b.contiguous(), ans_iou) - - return ans_iou - - -def nms_bev(boxes, scores, thresh, pre_max_size=None, post_max_size=None): - """NMS function GPU implementation (for BEV boxes). The overlap of two - boxes for IoU calculation is defined as the exact overlapping area of the - two boxes. In this function, one can also set ``pre_max_size`` and - ``post_max_size``. - - Args: - boxes (torch.Tensor): Input boxes with the shape of [N, 5] - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of [N]. - thresh (float): Overlap threshold of NMS. - pre_max_size (int, optional): Max size of boxes before NMS. - Default: None. - post_max_size (int, optional): Max size of boxes after NMS. - Default: None. - - Returns: - torch.Tensor: Indexes after NMS. 
- """ - assert boxes.size(1) == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - - if pre_max_size is not None: - order = order[:pre_max_size] - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = ext_module.iou3d_nms_forward(boxes, keep, thresh) - keep = order[keep[:num_out].cuda(boxes.device)].contiguous() - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -def nms_normal_bev(boxes, scores, thresh): - """Normal NMS function GPU implementation (for BEV boxes). The overlap of - two boxes for IoU calculation is defined as the exact overlapping area of - the two boxes WITH their yaw angle set to 0. - - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - thresh (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. - """ - assert boxes.shape[1] == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = ext_module.iou3d_nms_normal_forward(boxes, keep, thresh) - return order[keep[:num_out].cuda(boxes.device)].contiguous() diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py deleted file mode 100644 index 9b63f95cf86cf4a2f9b1f961da2e8db89739c5bf..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py +++ /dev/null @@ -1,531 +0,0 @@ -import logging -import tempfile -import os -import torch -import numpy as np -import json - -from collections import OrderedDict -from tqdm import tqdm - -from maskrcnn_benchmark.modeling.roi_heads.mask_head.inference import Masker -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou - - -def do_coco_evaluation( - dataset, - predictions, - box_only, - output_folder, - iou_types, - expected_results, - expected_results_sigma_tol, -): - logger = logging.getLogger("maskrcnn_benchmark.inference") - - if box_only: - logger.info("Evaluating bbox proposals") - if dataset.coco is None and output_folder: - json_results = prepare_for_tsv_detection(predictions, dataset) - with open(os.path.join(output_folder, "box_proposals.json"), "w") as f: - json.dump(json_results, f) - return None - areas = {"all": "", "small": "s", "medium": "m", "large": "l"} - res = COCOResults("box_proposal") - for limit in [100, 1000]: - for area, suffix in areas.items(): - stats = evaluate_box_proposals( - predictions, dataset, area=area, limit=limit - ) - key = "AR{}@{:d}".format(suffix, limit) - res.results["box_proposal"][key] = stats["ar"].item() - logger.info(res) - check_expected_results(res, expected_results, expected_results_sigma_tol) - if output_folder: - torch.save(res, os.path.join(output_folder, "box_proposals.pth")) - return res, predictions - logger.info("Preparing results for COCO format") - coco_results = {} - if "bbox" in iou_types: - logger.info("Preparing bbox results") - if dataset.coco is None: - coco_results["bbox"] = prepare_for_tsv_detection(predictions, dataset) - else: - coco_results["bbox"] = prepare_for_coco_detection(predictions, dataset) 
- if "segm" in iou_types: - logger.info("Preparing segm results") - coco_results["segm"] = prepare_for_coco_segmentation(predictions, dataset) - if 'keypoints' in iou_types: - logger.info('Preparing keypoints results') - coco_results['keypoints'] = prepare_for_coco_keypoint(predictions, dataset) - - results = COCOResults(*iou_types) - logger.info("Evaluating predictions") - for iou_type in iou_types: - with tempfile.NamedTemporaryFile() as f: - file_path = f.name - if output_folder: - file_path = os.path.join(output_folder, iou_type + ".json") - if dataset.coco: - res = evaluate_predictions_on_coco( - dataset.coco, coco_results[iou_type], file_path, iou_type - ) - results.update(res) - elif output_folder: - with open(file_path, "w") as f: - json.dump(coco_results[iou_type], f) - - logger.info(results) - check_expected_results(results, expected_results, expected_results_sigma_tol) - if output_folder: - torch.save(results, os.path.join(output_folder, "coco_results.pth")) - return results, coco_results - - -def prepare_for_tsv_detection(predictions, dataset): - # assert isinstance(dataset, COCODataset) - proposal_results = [] - image_list = [] - for im_id, prediction in enumerate(predictions): - image_info = dataset.get_img_info(im_id) - if len(prediction) == 0: - continue - - # TODO replace with get_img_info? - image_id = image_info["id"] - image_width = image_info["width"] - image_height = image_info["height"] - prediction = prediction.resize((image_width, image_height)) - prediction = prediction.convert("xywh") - - boxes = prediction.bbox.tolist() - scores = prediction.get_field("scores").tolist() - labels = prediction.get_field("labels").tolist() - if prediction.has_field("centers"): - centers = prediction.get_field("centers") - else: - centers = None - - for k, box in enumerate(boxes): - proposal = { - "image_id": image_id, - "category_id": labels[k], - "bbox": box, - "score": scores[k], - "area": image_width * image_height, - "iscrowd": 0, - } - if centers is not None: - proposal.update(center=centers[k].tolist()) - proposal_results.append(proposal) - - image_list.append(image_info) - - # categories = [{'supercategory': 'proposal', 'id': 0, 'name': 'proposal'}] - return dict(images=image_list, annotations=proposal_results) - - -def prepare_for_coco_detection(predictions, dataset): - # assert isinstance(dataset, COCODataset) - coco_results = [] - for image_id, prediction in enumerate(predictions): - original_id = dataset.id_to_img_map[image_id] - if len(prediction) == 0: - continue - - # TODO replace with get_img_info? 
- image_width = dataset.coco.imgs[original_id]["width"] - image_height = dataset.coco.imgs[original_id]["height"] - prediction = prediction.resize((image_width, image_height)) - prediction = prediction.convert("xywh") - - boxes = prediction.bbox.tolist() - scores = prediction.get_field("scores").tolist() - labels = prediction.get_field("labels").tolist() - - for k, box in enumerate(boxes): - if labels[k] in dataset.contiguous_category_id_to_json_id: - coco_results.append( - { - "image_id": original_id, - "category_id": dataset.contiguous_category_id_to_json_id[labels[k]], - "bbox": box, - "score": scores[k], - }) - - return coco_results - - -def prepare_for_coco_segmentation(predictions, dataset): - import pycocotools.mask as mask_util - import numpy as np - - masker = Masker(threshold=0.5, padding=1) - # assert isinstance(dataset, COCODataset) - coco_results = [] - for image_id, prediction in tqdm(enumerate(predictions)): - original_id = dataset.id_to_img_map[image_id] - if len(prediction) == 0: - continue - - # TODO replace with get_img_info? - image_width = dataset.coco.imgs[original_id]["width"] - image_height = dataset.coco.imgs[original_id]["height"] - prediction = prediction.resize((image_width, image_height)) - masks = prediction.get_field("mask") - # t = time.time() - # Masker is necessary only if masks haven't been already resized. - if list(masks.shape[-2:]) != [image_height, image_width]: - masks = masker(masks.expand(1, -1, -1, -1, -1), prediction) - masks = masks[0] - # logger.info('Time mask: {}'.format(time.time() - t)) - # prediction = prediction.convert('xywh') - - # boxes = prediction.bbox.tolist() - scores = prediction.get_field("scores").tolist() - labels = prediction.get_field("labels").tolist() - - # rles = prediction.get_field('mask') - - rles = [ - mask_util.encode(np.array(mask[0, :, :, np.newaxis], order="F"))[0] - for mask in masks - ] - for rle in rles: - rle["counts"] = rle["counts"].decode("utf-8") - - mapped_labels = [dataset.contiguous_category_id_to_json_id[i] for i in labels] - - coco_results.extend( - [ - { - "image_id": original_id, - "category_id": mapped_labels[k], - "segmentation": rle, - "score": scores[k], - } - for k, rle in enumerate(rles) - ] - ) - return coco_results - - -def prepare_for_coco_keypoint(predictions, dataset): - # assert isinstance(dataset, COCODataset) - coco_results = [] - for image_id, prediction in enumerate(predictions): - original_id = dataset.id_to_img_map[image_id] - if len(prediction.bbox) == 0: - continue - - # TODO replace with get_img_info? - image_width = dataset.coco.imgs[original_id]['width'] - image_height = dataset.coco.imgs[original_id]['height'] - prediction = prediction.resize((image_width, image_height)) - prediction = prediction.convert('xywh') - - boxes = prediction.bbox.tolist() - scores = prediction.get_field('scores').tolist() - labels = prediction.get_field('labels').tolist() - keypoints = prediction.get_field('keypoints') - keypoints = keypoints.resize((image_width, image_height)) - keypoints = keypoints.to_coco_format() - - mapped_labels = [dataset.contiguous_category_id_to_json_id[i] for i in labels] - - coco_results.extend([{ - 'image_id': original_id, - 'category_id': mapped_labels[k], - 'keypoints': keypoint, - 'score': scores[k]} for k, keypoint in enumerate(keypoints)]) - return coco_results - - -# inspired from Detectron -def evaluate_box_proposals( - predictions, dataset, thresholds=None, area="all", limit=None -): - """Evaluate detection proposal recall metrics. 
This function is a much - faster alternative to the official COCO API recall evaluation code. However, - it produces slightly different results. - """ - # Record max overlap value for each gt box - # Return vector of overlap values - areas = { - "all": 0, - "small": 1, - "medium": 2, - "large": 3, - "96-128": 4, - "128-256": 5, - "256-512": 6, - "512-inf": 7, - } - area_ranges = [ - [0 ** 2, 1e5 ** 2], # all - [0 ** 2, 32 ** 2], # small - [32 ** 2, 96 ** 2], # medium - [96 ** 2, 1e5 ** 2], # large - [96 ** 2, 128 ** 2], # 96-128 - [128 ** 2, 256 ** 2], # 128-256 - [256 ** 2, 512 ** 2], # 256-512 - [512 ** 2, 1e5 ** 2], - ] # 512-inf - assert area in areas, "Unknown area range: {}".format(area) - area_range = area_ranges[areas[area]] - gt_overlaps = [] - num_pos = 0 - - for image_id, prediction in enumerate(predictions): - original_id = dataset.id_to_img_map[image_id] - - # TODO replace with get_img_info? - image_width = dataset.coco.imgs[original_id]["width"] - image_height = dataset.coco.imgs[original_id]["height"] - prediction = prediction.resize((image_width, image_height)) - - # sort predictions in descending order - # TODO maybe remove this and make it explicit in the documentation - if prediction.has_field("objectness"): - inds = prediction.get_field("objectness").sort(descending=True)[1] - else: - inds = prediction.get_field("scores").sort(descending=True)[1] - prediction = prediction[inds] - - ann_ids = dataset.coco.getAnnIds(imgIds=original_id) - anno = dataset.coco.loadAnns(ann_ids) - gt_boxes = [obj["bbox"] for obj in anno if obj["iscrowd"] == 0] - gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes - gt_boxes = BoxList(gt_boxes, (image_width, image_height), mode="xywh").convert( - "xyxy" - ) - gt_areas = torch.as_tensor([obj["area"] for obj in anno if obj["iscrowd"] == 0]) - - if len(gt_boxes) == 0: - continue - - valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1]) - gt_boxes = gt_boxes[valid_gt_inds] - - num_pos += len(gt_boxes) - - if len(gt_boxes) == 0: - continue - - if len(prediction) == 0: - continue - - if limit is not None and len(prediction) > limit: - prediction = prediction[:limit] - - overlaps = boxlist_iou(prediction, gt_boxes) - - _gt_overlaps = torch.zeros(len(gt_boxes)) - for j in range(min(len(prediction), len(gt_boxes))): - # find which proposal box maximally covers each gt box - # and get the iou amount of coverage for each gt box - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # find which gt box is 'best' covered (i.e. 
'best' = most iou) - gt_ovr, gt_ind = max_overlaps.max(dim=0) - assert gt_ovr >= 0 - # find the proposal box that covers the best covered gt box - box_ind = argmax_overlaps[gt_ind] - # record the iou coverage of this gt box - _gt_overlaps[j] = overlaps[box_ind, gt_ind] - assert _gt_overlaps[j] == gt_ovr - # mark the proposal box and the gt box as used - overlaps[box_ind, :] = -1 - overlaps[:, gt_ind] = -1 - - # append recorded iou coverage level - gt_overlaps.append(_gt_overlaps) - - if len(gt_overlaps) == 0: - return { - "ar": torch.zeros(1), - "recalls": torch.zeros(1), - "thresholds": thresholds, - "gt_overlaps": gt_overlaps, - "num_pos": num_pos, - } - - gt_overlaps = torch.cat(gt_overlaps, dim=0) - gt_overlaps, _ = torch.sort(gt_overlaps) - - if thresholds is None: - step = 0.05 - thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32) - recalls = torch.zeros_like(thresholds) - # compute recall for each iou threshold - for i, t in enumerate(thresholds): - recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos) - # ar = 2 * np.trapz(recalls, thresholds) - ar = recalls.mean() - return { - "ar": ar, - "recalls": recalls, - "thresholds": thresholds, - "gt_overlaps": gt_overlaps, - "num_pos": num_pos, - } - - -def evaluate_predictions_on_coco( - coco_gt, coco_results, json_result_file, iou_type="bbox" -): - import json - - with open(json_result_file, "w") as f: - json.dump(coco_results, f) - - from pycocotools.coco import COCO - from pycocotools.cocoeval import COCOeval - - coco_dt = coco_gt.loadRes(str(json_result_file)) if coco_results else COCO() - - # coco_dt = coco_gt.loadRes(coco_results) - if iou_type == 'keypoints': - coco_gt = filter_valid_keypoints(coco_gt, coco_dt) - coco_eval = COCOeval(coco_gt, coco_dt, iou_type) - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - if iou_type == 'bbox': - summarize_per_category(coco_eval, json_result_file.replace('.json', '.csv')) - return coco_eval - - -def summarize_per_category(coco_eval, csv_output=None): - ''' - Compute and display summary metrics for evaluation results. - Note this functin can *only* be applied on the default parameter setting - ''' - - def _summarize(iouThr=None, areaRng='all', maxDets=100): - p = coco_eval.params - titleStr = 'Average Precision' - typeStr = '(AP)' - iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \ - if iouThr is None else '{:0.2f}'.format(iouThr) - result_str = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ], '. 
\ - format(titleStr, typeStr, iouStr, areaRng, maxDets) - - aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng] - mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets] - - # dimension of precision: [TxRxKxAxM] - s = coco_eval.eval['precision'] - # IoU - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:, :, :, aind, mind] - - if len(s[s > -1]) == 0: - mean_s = -1 - else: - mean_s = np.mean(s[s > -1]) - # cacluate AP(average precision) for each category - num_classes = len(p.catIds) - avg_ap = 0.0 - for i in range(0, num_classes): - result_str += '{}, '.format(np.mean(s[:, :, i, :])) - avg_ap += np.mean(s[:, :, i, :]) - result_str += ('{} \n'.format(avg_ap / num_classes)) - return result_str - - id2name = {} - for _, cat in coco_eval.cocoGt.cats.items(): - id2name[cat['id']] = cat['name'] - title_str = 'metric, ' - for cid in coco_eval.params.catIds: - title_str += '{}, '.format(id2name[cid]) - title_str += 'avg \n' - - results = [title_str] - results.append(_summarize()) - results.append(_summarize(iouThr=.5, maxDets=coco_eval.params.maxDets[2])) - results.append(_summarize(areaRng='small', maxDets=coco_eval.params.maxDets[2])) - results.append(_summarize(areaRng='medium', maxDets=coco_eval.params.maxDets[2])) - results.append(_summarize(areaRng='large', maxDets=coco_eval.params.maxDets[2])) - - with open(csv_output, 'w') as f: - for result in results: - f.writelines(result) - - -def filter_valid_keypoints(coco_gt, coco_dt): - kps = coco_dt.anns[1]['keypoints'] - for id, ann in coco_gt.anns.items(): - ann['keypoints'][2::3] = [a * b for a, b in zip(ann['keypoints'][2::3], kps[2::3])] - ann['num_keypoints'] = sum(ann['keypoints'][2::3]) - return coco_gt - - -class COCOResults(object): - METRICS = { - "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"], - "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"], - "box_proposal": [ - "AR@100", - "ARs@100", - "ARm@100", - "ARl@100", - "AR@1000", - "ARs@1000", - "ARm@1000", - "ARl@1000", - ], - "keypoints": ["AP", "AP50", "AP75", "APm", "APl"], - } - - def __init__(self, *iou_types): - allowed_types = ("box_proposal", "bbox", "segm", "keypoints") - assert all(iou_type in allowed_types for iou_type in iou_types) - results = OrderedDict() - for iou_type in iou_types: - results[iou_type] = OrderedDict( - [(metric, -1) for metric in COCOResults.METRICS[iou_type]] - ) - self.results = results - - def update(self, coco_eval): - if coco_eval is None: - return - from pycocotools.cocoeval import COCOeval - - assert isinstance(coco_eval, COCOeval) - s = coco_eval.stats - iou_type = coco_eval.params.iouType - res = self.results[iou_type] - metrics = COCOResults.METRICS[iou_type] - for idx, metric in enumerate(metrics): - res[metric] = s[idx] - - def __repr__(self): - # TODO make it pretty - return repr(self.results) - - -def check_expected_results(results, expected_results, sigma_tol): - if not expected_results: - return - - logger = logging.getLogger("maskrcnn_benchmark.inference") - for task, metric, (mean, std) in expected_results: - actual_val = results.results[task][metric] - lo = mean - sigma_tol * std - hi = mean + sigma_tol * std - ok = (lo < actual_val) and (actual_val < hi) - msg = ( - "{} > {} sanity check (actual vs. expected): " - "{:.3f} vs. 
mean={:.4f}, std={:.4}, range=({:.4f}, {:.4f})" - ).format(task, metric, actual_val, mean, std, lo, hi) - if not ok: - msg = "FAIL: " + msg - logger.error(msg) - else: - msg = "PASS: " + msg - logger.info(msg) diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/predictor.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/predictor.py deleted file mode 100644 index 3175a08b3284faaa353efb1f8e4dce0d98ef83e2..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/predictor.py +++ /dev/null @@ -1,568 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import cv2 -import torch -import numpy as np -from torchvision import transforms as T - -from maskrcnn_benchmark.modeling.detector import build_detection_model -from maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer -from maskrcnn_benchmark.structures.image_list import to_image_list -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.modeling.roi_heads.mask_head.inference import Masker -from maskrcnn_benchmark import layers as L -from maskrcnn_benchmark.utils import cv2_util - - -import timeit - -class COCODemo(object): - # COCO categories for pretty print - CATEGORIES = [ - "__background", - "person", - "bicycle", - "car", - "motorcycle", - "airplane", - "bus", - "train", - "truck", - "boat", - "traffic light", - "fire hydrant", - "stop sign", - "parking meter", - "bench", - "bird", - "cat", - "dog", - "horse", - "sheep", - "cow", - "elephant", - "bear", - "zebra", - "giraffe", - "backpack", - "umbrella", - "handbag", - "tie", - "suitcase", - "frisbee", - "skis", - "snowboard", - "sports ball", - "kite", - "baseball bat", - "baseball glove", - "skateboard", - "surfboard", - "tennis racket", - "bottle", - "wine glass", - "cup", - "fork", - "knife", - "spoon", - "bowl", - "banana", - "apple", - "sandwich", - "orange", - "broccoli", - "carrot", - "hot dog", - "pizza", - "donut", - "cake", - "chair", - "couch", - "potted plant", - "bed", - "dining table", - "toilet", - "tv", - "laptop", - "mouse", - "remote", - "keyboard", - "cell phone", - "microwave", - "oven", - "toaster", - "sink", - "refrigerator", - "book", - "clock", - "vase", - "scissors", - "teddy bear", - "hair drier", - "toothbrush", - ] - - def __init__( - self, - cfg, - confidence_threshold=0.7, - show_mask_heatmaps=False, - masks_per_dim=2, - min_image_size=None, - exclude_region=None, - ): - self.cfg = cfg.clone() - self.model = build_detection_model(cfg) - self.model.eval() - self.device = torch.device(cfg.MODEL.DEVICE) - self.model.to(self.device) - self.min_image_size = min_image_size - - save_dir = cfg.OUTPUT_DIR - checkpointer = DetectronCheckpointer(cfg, self.model, save_dir=save_dir) - _ = checkpointer.load(cfg.MODEL.WEIGHT) - - self.transforms = self.build_transform() - - mask_threshold = -1 if show_mask_heatmaps else 0.5 - self.masker = Masker(threshold=mask_threshold, padding=1) - - # used to make colors for each class - self.palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1]) - - self.cpu_device = torch.device("cpu") - self.confidence_threshold = confidence_threshold - self.show_mask_heatmaps = show_mask_heatmaps - self.masks_per_dim = masks_per_dim - self.exclude_region = exclude_region - - def build_transform(self): - """ - Creates a basic transformation that was used to train the models - 
""" - cfg = self.cfg - - # we are loading images with OpenCV, so we don't need to convert them - # to BGR, they are already! So all we need to do is to normalize - # by 255 if we want to convert to BGR255 format, or flip the channels - # if we want it to be in RGB in [0-1] range. - if cfg.INPUT.TO_BGR255: - to_bgr_transform = T.Lambda(lambda x: x * 255) - else: - to_bgr_transform = T.Lambda(lambda x: x[[2, 1, 0]]) - - normalize_transform = T.Normalize( - mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD - ) - - transform = T.Compose( - [ - T.ToPILImage(), - T.Resize(self.min_image_size) if self.min_image_size is not None else lambda x:x, - T.ToTensor(), - to_bgr_transform, - normalize_transform, - ] - ) - return transform - - def inference(self, image, debug=False): - """ - Arguments: - image (np.ndarray): an image as returned by OpenCV - - Returns: - prediction (BoxList): the detected objects. Additional information - of the detection properties can be found in the fields of - the BoxList via `prediction.fields()` - """ - predictions, debug_info = self.compute_prediction(image) - top_predictions = self.select_top_predictions(predictions) - - if debug: - return top_predictions, debug_info - else: - return top_predictions - - def run_on_opencv_image(self, image): - """ - Arguments: - image (np.ndarray): an image as returned by OpenCV - - Returns: - prediction (BoxList): the detected objects. Additional information - of the detection properties can be found in the fields of - the BoxList via `prediction.fields()` - """ - predictions, debug_info = self.compute_prediction(image) - top_predictions = self.select_top_predictions(predictions) - - result = image.copy() - if self.show_mask_heatmaps: - return self.create_mask_montage(result, top_predictions) - result = self.overlay_boxes(result, top_predictions) - if self.cfg.MODEL.MASK_ON: - result = self.overlay_mask(result, top_predictions) - if self.cfg.MODEL.KEYPOINT_ON: - result = self.overlay_keypoints(result, top_predictions) - result = self.overlay_class_names(result, top_predictions) - - return result, debug_info, top_predictions - - def compute_prediction(self, original_image): - """ - Arguments: - original_image (np.ndarray): an image as returned by OpenCV - - Returns: - prediction (BoxList): the detected objects. 
Additional information - of the detection properties can be found in the fields of - the BoxList via `prediction.fields()` - """ - # apply pre-processing to image - # if self.exclude_region: - # for region in self.exclude_region: - # original_image[region[1]:region[3], region[0]:region[2], :] = 255 - image = self.transforms(original_image) - - - # convert to an ImageList, padded so that it is divisible by - # cfg.DATALOADER.SIZE_DIVISIBILITY - image_list = to_image_list(image, self.cfg.DATALOADER.SIZE_DIVISIBILITY) - image_list = image_list.to(self.device) - tic = timeit.time.perf_counter() - - # compute predictions - with torch.no_grad(): - predictions, debug_info = self.model(image_list) - predictions = [o.to(self.cpu_device) for o in predictions] - debug_info['total_time'] = timeit.time.perf_counter() - tic - - # always single image is passed at a time - prediction = predictions[0] - - # reshape prediction (a BoxList) into the original image size - height, width = original_image.shape[:-1] - prediction = prediction.resize((width, height)) - - if prediction.has_field("mask"): - # if we have masks, paste the masks in the right position - # in the image, as defined by the bounding boxes - masks = prediction.get_field("mask") - # always single image is passed at a time - masks = self.masker([masks], [prediction])[0] - prediction.add_field("mask", masks) - - return prediction, debug_info - - def select_top_predictions(self, predictions): - """ - Select only predictions which have a `score` > self.confidence_threshold, - and returns the predictions in descending order of score - - Arguments: - predictions (BoxList): the result of the computation by the model. - It should contain the field `scores`. - - Returns: - prediction (BoxList): the detected objects. Additional information - of the detection properties can be found in the fields of - the BoxList via `prediction.fields()` - """ - - scores = predictions.get_field("scores") - labels = predictions.get_field("labels").tolist() - thresh = scores.clone() - for i,lb in enumerate(labels): - if isinstance(self.confidence_threshold, float): - thresh[i] = self.confidence_threshold - elif len(self.confidence_threshold)==1: - thresh[i] = self.confidence_threshold[0] - else: - thresh[i] = self.confidence_threshold[lb-1] - keep = torch.nonzero(scores > thresh).squeeze(1) - predictions = predictions[keep] - - if self.exclude_region: - exlude = BoxList(self.exclude_region, predictions.size) - iou = boxlist_iou(exlude, predictions) - keep = torch.nonzero(torch.sum(iou>0.5, dim=0)==0).squeeze(1) - if len(keep)>0: - predictions = predictions[keep] - - scores = predictions.get_field("scores") - _, idx = scores.sort(0, descending=True) - return predictions[idx] - - def compute_colors_for_labels(self, labels): - """ - Simple function that adds fixed colors depending on the class - """ - colors = (30*(labels[:, None] -1)+1)*self.palette - colors = (colors % 255).numpy().astype("uint8") - return colors - - def overlay_boxes(self, image, predictions): - """ - Adds the predicted boxes on top of the image - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `labels`. 
- """ - labels = predictions.get_field("labels") - boxes = predictions.bbox - - colors = self.compute_colors_for_labels(labels).tolist() - - for box, color in zip(boxes, colors): - box = box.to(torch.int64) - top_left, bottom_right = box[:2].tolist(), box[2:].tolist() - image = cv2.rectangle( - image, tuple(top_left), tuple(bottom_right), tuple(color), 2) - - return image - - def overlay_scores(self, image, predictions): - """ - Adds the predicted boxes on top of the image - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `labels`. - """ - scores = predictions.get_field("scores") - boxes = predictions.bbox - - for box, score in zip(boxes, scores): - box = box.to(torch.int64) - image = cv2.putText(image, '%.3f'%score, - (box[0], (box[1]+box[3])/2), - cv2.FONT_HERSHEY_SIMPLEX, 0.5, - (255,255,255), 1) - - return image - - def overlay_cboxes(self, image, predictions): - """ - Adds the predicted boxes on top of the image - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `labels`. - """ - scores = predictions.get_field("scores") - boxes = predictions.bbox - for box, score in zip(boxes, scores): - box = box.to(torch.int64) - top_left, bottom_right = box[:2].tolist(), box[2:].tolist() - image = cv2.rectangle( - image, tuple(top_left), tuple(bottom_right), (255,0,0), 2) - image = cv2.putText(image, '%.3f'%score, - (box[0], (box[1]+box[3])/2), - cv2.FONT_HERSHEY_SIMPLEX, 0.5, - (255,0,0), 1) - return image - - def overlay_centers(self, image, predictions): - """ - Adds the predicted boxes on top of the image - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `labels`. - """ - centers = predictions.get_field("centers") - - for cord in centers: - cord = cord.to(torch.int64) - image = cv2.circle(image, (cord[0].item(),cord[1].item()), - 2, (255,0,0), 20) - - return image - - def overlay_count(self, image, predictions): - """ - Adds the predicted boxes on top of the image - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `labels`. - """ - if isinstance(predictions, int): - count = predictions - else: - count = len(predictions) - image = cv2.putText(image, 'Count: %d'%count, (0,100), cv2.FONT_HERSHEY_SIMPLEX, 3, (255,0,0), 3) - - return image - - def overlay_mask(self, image, predictions): - """ - Adds the instances contours for each predicted object. - Each label has a different color. - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `mask` and `labels`. 
- """ - masks = predictions.get_field("mask").numpy() - labels = predictions.get_field("labels") - - colors = self.compute_colors_for_labels(labels).tolist() - - for mask, color in zip(masks, colors): - thresh = mask[0, :, :, None].astype(np.uint8) - contours, hierarchy = cv2_util.findContours( - thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - image = cv2.drawContours(image, contours, -1, color, 3) - - composite = image - - return composite - - def overlay_keypoints(self, image, predictions): - keypoints = predictions.get_field("keypoints") - kps = keypoints.keypoints - scores = keypoints.get_field("logits") - kps = torch.cat((kps[:, :, 0:2], scores[:, :, None]), dim=2).numpy() - for region in kps: - image = vis_keypoints(image, region.transpose((1, 0)), - names=keypoints.NAMES, connections=keypoints.CONNECTIONS) - return image - - def create_mask_montage(self, image, predictions): - """ - Create a montage showing the probability heatmaps for each one one of the - detected objects - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `mask`. - """ - masks = predictions.get_field("mask") - masks_per_dim = self.masks_per_dim - masks = L.interpolate( - masks.float(), scale_factor=1 / masks_per_dim - ).byte() - height, width = masks.shape[-2:] - max_masks = masks_per_dim ** 2 - masks = masks[:max_masks] - # handle case where we have less detections than max_masks - if len(masks) < max_masks: - masks_padded = torch.zeros(max_masks, 1, height, width, dtype=torch.uint8) - masks_padded[: len(masks)] = masks - masks = masks_padded - masks = masks.reshape(masks_per_dim, masks_per_dim, height, width) - result = torch.zeros( - (masks_per_dim * height, masks_per_dim * width), dtype=torch.uint8 - ) - for y in range(masks_per_dim): - start_y = y * height - end_y = (y + 1) * height - for x in range(masks_per_dim): - start_x = x * width - end_x = (x + 1) * width - result[start_y:end_y, start_x:end_x] = masks[y, x] - return cv2.applyColorMap(result.numpy(), cv2.COLORMAP_JET) - - def overlay_class_names(self, image, predictions, names=None): - """ - Adds detected class names and scores in the positions defined by the - top-left corner of the predicted bounding box - - Arguments: - image (np.ndarray): an image as returned by OpenCV - predictions (BoxList): the result of the computation by the model. - It should contain the field `scores` and `labels`. - """ - scores = predictions.get_field("scores").tolist() - labels = predictions.get_field("labels").tolist() - if names: - labels = [names[i-1] for i in labels] - else: - labels = [self.CATEGORIES[i] for i in labels] - boxes = predictions.bbox - - template = "{}: {:.2f}" - for box, score, label in zip(boxes, scores, labels): - x, y = box[:2] - s = template.format(label, score) - cv2.putText( - image, s, (x, y), cv2.FONT_HERSHEY_SIMPLEX, .5, (255, 255, 255), 1 - ) - - return image - -def vis_keypoints(img, kps, kp_thresh=0, alpha=0.7, names=None, connections=None): - """Visualizes keypoints (adapted from vis_one_image). - kps has shape (4, #keypoints) where 4 rows are (x, y, logit, prob). 
- """ - - dataset_keypoints = names - kp_lines = connections - - # simple rainbow color map implementation - blue_red_ratio = 0.8 - gx = lambda x: (6-2*blue_red_ratio)*x + blue_red_ratio - colors = [[256*max(0, (3-abs(gx(i)-4)-abs(gx(i)-5))/2), - 256*max(0, (3-abs(gx(i)-2)-abs(gx(i)-4))/2), - 256*max(0, (3-abs(gx(i)-1)-abs(gx(i)-2))/2),] for i in np.linspace(0, 1, len(kp_lines) + 2)] - - # Perform the drawing on a copy of the image, to allow for blending. - kp_mask = np.copy(img) - - # Draw mid shoulder / mid hip first for better visualization. - mid_shoulder = ( - kps[:2, dataset_keypoints.index('right_shoulder')] + - kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0 - sc_mid_shoulder = np.minimum( - kps[2, dataset_keypoints.index('right_shoulder')], - kps[2, dataset_keypoints.index('left_shoulder')]) - nose_idx = dataset_keypoints.index('nose') - if sc_mid_shoulder > kp_thresh and kps[2, nose_idx] > kp_thresh: - cv2.line( - kp_mask, tuple(mid_shoulder), tuple(kps[:2, nose_idx]), - color=colors[len(kp_lines)], thickness=2, lineType=cv2.LINE_AA) - - if 'right_hip' in names and 'left_hip' in names: - mid_hip = ( - kps[:2, dataset_keypoints.index('right_hip')] + - kps[:2, dataset_keypoints.index('left_hip')]) / 2.0 - sc_mid_hip = np.minimum( - kps[2, dataset_keypoints.index('right_hip')], - kps[2, dataset_keypoints.index('left_hip')]) - if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh: - cv2.line( - kp_mask, tuple(mid_shoulder), tuple(mid_hip), - color=colors[len(kp_lines) + 1], thickness=2, lineType=cv2.LINE_AA) - - # Draw the keypoints. - for l in range(len(kp_lines)): - i1 = kp_lines[l][0] - i2 = kp_lines[l][1] - p1 = kps[0, i1], kps[1, i1] - p2 = kps[0, i2], kps[1, i2] - if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh: - cv2.line( - kp_mask, p1, p2, - color=colors[l], thickness=2, lineType=cv2.LINE_AA) - if kps[2, i1] > kp_thresh: - cv2.circle( - kp_mask, p1, - radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA) - if kps[2, i2] > kp_thresh: - cv2.circle( - kp_mask, p2, - radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA) - - # Blend the keypoints. - return cv2.addWeighted(img, 1.0 - alpha, kp_mask, alpha, 0) \ No newline at end of file diff --git a/spaces/Potato-ML/Spaceship_Titanic/README.md b/spaces/Potato-ML/Spaceship_Titanic/README.md deleted file mode 100644 index 4f9fac8e3cab805833fc7706ef4d0d6101d2b306..0000000000000000000000000000000000000000 --- a/spaces/Potato-ML/Spaceship_Titanic/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Test -emoji: 💻 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CHANGELOG.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CHANGELOG.md deleted file mode 100644 index aabf9130b0a67aca9beaac9f2cb1a40237a4468d..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CHANGELOG.md +++ /dev/null @@ -1,28 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [1.0.0] - 2023-08-02 - -Major revision, added training code for EnCodec, AudioGen, MusicGen, and MultiBandDiffusion. -Added pretrained model for AudioGen and MultiBandDiffusion. - -## [0.0.2] - 2023-08-01 - -Improved demo, fixed top p (thanks @jnordberg). 
- -Compressor tanh on output to avoid clipping with some style (especially piano). -Now repeating the conditioning periodically if it is too short. - -More options when launching Gradio app locally (thanks @ashleykleynhans). - -Testing out PyTorch 2.0 memory efficient attention. - -Added extended generation (infinite length) by slowly moving the windows. -Note that other implementations exist: https://github.com/camenduru/MusicGen-colab. - -## [0.0.1] - 2023-06-09 - -Initial release, with model evaluation only. diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/adversarial/__init__.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/adversarial/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/adversarial/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/main.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/main.py deleted file mode 100644 index 3d83cb21c1dc5d0f5d7f396479c74e64691ec364..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/main.py +++ /dev/null @@ -1,585 +0,0 @@ -import argparse, os, sys, datetime, glob, importlib -from omegaconf import OmegaConf -import numpy as np -from PIL import Image -import torch -import torchvision -from torch.utils.data import random_split, DataLoader, Dataset -import pytorch_lightning as pl -from pytorch_lightning import seed_everything -from pytorch_lightning.trainer import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor -from pytorch_lightning.utilities.distributed import rank_zero_only - -from taming.data.utils import custom_collate - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def get_parser(**parser_kwargs): - def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - parser = argparse.ArgumentParser(**parser_kwargs) - parser.add_argument( - "-n", - "--name", - type=str, - const=True, - default="", - nargs="?", - help="postfix for logdir", - ) - parser.add_argument( - "-r", - "--resume", - type=str, - const=True, - default="", - nargs="?", - help="resume from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-b", - "--base", - nargs="*", - metavar="base_config.yaml", - help="paths to base configs. Loaded from left-to-right. 
" - "Parameters can be overwritten or added with command-line options of the form `--key value`.", - default=list(), - ) - parser.add_argument( - "-t", - "--train", - type=str2bool, - const=True, - default=False, - nargs="?", - help="train", - ) - parser.add_argument( - "--no-test", - type=str2bool, - const=True, - default=False, - nargs="?", - help="disable test", - ) - parser.add_argument("-p", "--project", help="name of new or path to existing project") - parser.add_argument( - "-d", - "--debug", - type=str2bool, - nargs="?", - const=True, - default=False, - help="enable post-mortem debugging", - ) - parser.add_argument( - "-s", - "--seed", - type=int, - default=23, - help="seed for seed_everything", - ) - parser.add_argument( - "-f", - "--postfix", - type=str, - default="", - help="post-postfix for default name", - ) - - return parser - - -def nondefault_trainer_args(opt): - parser = argparse.ArgumentParser() - parser = Trainer.add_argparse_args(parser) - args = parser.parse_args([]) - return sorted(k for k in vars(args) if getattr(opt, k) != getattr(args, k)) - - -def instantiate_from_config(config): - if not "target" in config: - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -class WrappedDataset(Dataset): - """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset""" - def __init__(self, dataset): - self.data = dataset - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -class DataModuleFromConfig(pl.LightningDataModule): - def __init__(self, batch_size, train=None, validation=None, test=None, - wrap=False, num_workers=None): - super().__init__() - self.batch_size = batch_size - self.dataset_configs = dict() - self.num_workers = num_workers if num_workers is not None else batch_size*2 - if train is not None: - self.dataset_configs["train"] = train - self.train_dataloader = self._train_dataloader - if validation is not None: - self.dataset_configs["validation"] = validation - self.val_dataloader = self._val_dataloader - if test is not None: - self.dataset_configs["test"] = test - self.test_dataloader = self._test_dataloader - self.wrap = wrap - - def prepare_data(self): - for data_cfg in self.dataset_configs.values(): - instantiate_from_config(data_cfg) - - def setup(self, stage=None): - self.datasets = dict( - (k, instantiate_from_config(self.dataset_configs[k])) - for k in self.dataset_configs) - if self.wrap: - for k in self.datasets: - self.datasets[k] = WrappedDataset(self.datasets[k]) - - def _train_dataloader(self): - return DataLoader(self.datasets["train"], batch_size=self.batch_size, - num_workers=self.num_workers, shuffle=True, collate_fn=custom_collate) - - def _val_dataloader(self): - return DataLoader(self.datasets["validation"], - batch_size=self.batch_size, - num_workers=self.num_workers, collate_fn=custom_collate) - - def _test_dataloader(self): - return DataLoader(self.datasets["test"], batch_size=self.batch_size, - num_workers=self.num_workers, collate_fn=custom_collate) - - -class SetupCallback(Callback): - def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config): - super().__init__() - self.resume = resume - self.now = now - self.logdir = logdir - self.ckptdir = ckptdir - self.cfgdir = cfgdir - self.config = config - self.lightning_config = lightning_config - - def on_pretrain_routine_start(self, trainer, pl_module): - if trainer.global_rank == 0: - # Create logdirs and 
save configs - os.makedirs(self.logdir, exist_ok=True) - os.makedirs(self.ckptdir, exist_ok=True) - os.makedirs(self.cfgdir, exist_ok=True) - - print("Project config") - print(self.config.pretty()) - OmegaConf.save(self.config, - os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) - - print("Lightning config") - print(self.lightning_config.pretty()) - OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), - os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) - - else: - # ModelCheckpoint callback created log directory --- remove it - if not self.resume and os.path.exists(self.logdir): - dst, name = os.path.split(self.logdir) - dst = os.path.join(dst, "child_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - try: - os.rename(self.logdir, dst) - except FileNotFoundError: - pass - - -class ImageLogger(Callback): - def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True): - super().__init__() - self.batch_freq = batch_frequency - self.max_images = max_images - self.logger_log_images = { - pl.loggers.WandbLogger: self._wandb, - pl.loggers.TestTubeLogger: self._testtube, - } - self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)] - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - - @rank_zero_only - def _wandb(self, pl_module, images, batch_idx, split): - raise ValueError("No way wandb") - grids = dict() - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grids[f"{split}/{k}"] = wandb.Image(grid) - pl_module.logger.experiment.log(grids) - - @rank_zero_only - def _testtube(self, pl_module, images, batch_idx, split): - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grid = (grid+1.0)/2.0 # -1,1 -> 0,1; c,h,w - - tag = f"{split}/{k}" - pl_module.logger.experiment.add_image( - tag, grid, - global_step=pl_module.global_step) - - @rank_zero_only - def log_local(self, save_dir, split, images, - global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "images", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - - grid = (grid+1.0)/2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0,1).transpose(1,2).squeeze(-1) - grid = grid.numpy() - grid = (grid*255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format( - k, - global_step, - current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - if (self.check_frequency(batch_idx) and # batch_idx % self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split, pl_module=pl_module) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) 
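# A quick numeric sketch of the logging cadence configured in __init__ above,
# assuming batch_frequency=750 and increase_log_steps=True (an assumed setting,
# not taken from any config in this repo): images are logged at the power-of-two
# warm-up steps and then at every multiple of batch_frequency, which is exactly
# what check_frequency below tests.
import numpy as np

batch_freq = 750
log_steps = [2 ** n for n in range(int(np.log2(batch_freq)) + 1)]
print(log_steps)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
print([step for step in range(2000) if step % batch_freq == 0 or step in log_steps])
# [0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 750, 1500]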
- - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None) - logger_log_images(pl_module, images, pl_module.global_step, split) - - if is_train: - pl_module.train() - - def check_frequency(self, batch_idx): - if (batch_idx % self.batch_freq) == 0 or (batch_idx in self.log_steps): - try: - self.log_steps.pop(0) - except IndexError: - pass - return True - return False - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - self.log_img(pl_module, batch, batch_idx, split="train") - - def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - self.log_img(pl_module, batch, batch_idx, split="val") - - - -if __name__ == "__main__": - # custom parser to specify config files, train, test and debug mode, - # postfix, resume. - # `--key value` arguments are interpreted as arguments to the trainer. - # `nested.key=value` arguments are interpreted as config parameters. - # configs are merged from left-to-right followed by command line parameters. - - # model: - # base_learning_rate: float - # target: path to lightning module - # params: - # key: value - # data: - # target: main.DataModuleFromConfig - # params: - # batch_size: int - # wrap: bool - # train: - # target: path to train dataset - # params: - # key: value - # validation: - # target: path to validation dataset - # params: - # key: value - # test: - # target: path to test dataset - # params: - # key: value - # lightning: (optional, has sane defaults and can be specified on cmdline) - # trainer: - # additional arguments to trainer - # logger: - # logger to instantiate - # modelcheckpoint: - # modelcheckpoint to instantiate - # callbacks: - # callback1: - # target: importpath - # params: - # key: value - - now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - - # add cwd for convenience and to make classes in this file available when - # running as `python main.py` - # (in particular `main.DataModuleFromConfig`) - sys.path.append(os.getcwd()) - - parser = get_parser() - parser = Trainer.add_argparse_args(parser) - - opt, unknown = parser.parse_known_args() - if opt.name and opt.resume: - raise ValueError( - "-n/--name and -r/--resume cannot be specified both." 
- "If you want to resume training in a new log folder, " - "use -n/--name in combination with --resume_from_checkpoint" - ) - if opt.resume: - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - paths = opt.resume.split("/") - idx = len(paths)-paths[::-1].index("logs")+1 - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), opt.resume - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "checkpoints", "last.ckpt") - - opt.resume_from_checkpoint = ckpt - base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml"))) - opt.base = base_configs+opt.base - _tmp = logdir.split("/") - nowname = _tmp[_tmp.index("logs")+1] - else: - if opt.name: - name = "_"+opt.name - elif opt.base: - cfg_fname = os.path.split(opt.base[0])[-1] - cfg_name = os.path.splitext(cfg_fname)[0] - name = "_"+cfg_name - else: - name = "" - nowname = now+name+opt.postfix - logdir = os.path.join("logs", nowname) - - ckptdir = os.path.join(logdir, "checkpoints") - cfgdir = os.path.join(logdir, "configs") - seed_everything(opt.seed) - - try: - # init and save configs - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - config = OmegaConf.merge(*configs, cli) - lightning_config = config.pop("lightning", OmegaConf.create()) - # merge trainer cli with config - trainer_config = lightning_config.get("trainer", OmegaConf.create()) - # default to ddp - trainer_config["distributed_backend"] = "ddp" - for k in nondefault_trainer_args(opt): - trainer_config[k] = getattr(opt, k) - if not "gpus" in trainer_config: - del trainer_config["distributed_backend"] - cpu = True - else: - gpuinfo = trainer_config["gpus"] - print(f"Running on GPUs {gpuinfo}") - cpu = False - trainer_opt = argparse.Namespace(**trainer_config) - lightning_config.trainer = trainer_config - - # model - model = instantiate_from_config(config.model) - - # trainer and callbacks - trainer_kwargs = dict() - - # default logger configs - # NOTE wandb < 0.10.0 interferes with shutdown - # wandb >= 0.10.0 seems to fix it but still interferes with pudb - # debugging (wrongly sized pudb ui) - # thus prefer testtube for now - default_logger_cfgs = { - "wandb": { - "target": "pytorch_lightning.loggers.WandbLogger", - "params": { - "name": nowname, - "save_dir": logdir, - "offline": opt.debug, - "id": nowname, - } - }, - "testtube": { - "target": "pytorch_lightning.loggers.TestTubeLogger", - "params": { - "name": "testtube", - "save_dir": logdir, - } - }, - } - default_logger_cfg = default_logger_cfgs["testtube"] - logger_cfg = lightning_config.logger or OmegaConf.create() - logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg) - trainer_kwargs["logger"] = instantiate_from_config(logger_cfg) - - # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to - # specify which metric is used to determine best models - default_modelckpt_cfg = { - "target": "pytorch_lightning.callbacks.ModelCheckpoint", - "params": { - "dirpath": ckptdir, - "filename": "{epoch:06}", - "verbose": True, - "save_last": True, - } - } - if hasattr(model, "monitor"): - print(f"Monitoring {model.monitor} as checkpoint metric.") - default_modelckpt_cfg["params"]["monitor"] = model.monitor - default_modelckpt_cfg["params"]["save_top_k"] = 3 - - modelckpt_cfg = lightning_config.modelcheckpoint or OmegaConf.create() - modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg) - 
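# A minimal sketch of the OmegaConf merge precedence relied on throughout this
# setup block (base configs merged left-to-right, then `nested.key=value`
# command-line overrides, as done for `config` and `modelckpt_cfg` above); the
# keys and values below are invented purely for illustration.
from omegaconf import OmegaConf

base = OmegaConf.create({"model": {"base_learning_rate": 4.5e-06, "params": {"embed_dim": 256}}})
override = OmegaConf.create({"model": {"params": {"embed_dim": 1024}}})
cli = OmegaConf.from_dotlist(["model.base_learning_rate=0.0001"])
merged = OmegaConf.merge(base, override, cli)
print(merged.model.params.embed_dim)     # 1024   -- later configs win
print(merged.model.base_learning_rate)   # 0.0001 -- CLI dotlist overrides win last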
trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg) - - # add callback which sets up log directory - default_callbacks_cfg = { - "setup_callback": { - "target": "main.SetupCallback", - "params": { - "resume": opt.resume, - "now": now, - "logdir": logdir, - "ckptdir": ckptdir, - "cfgdir": cfgdir, - "config": config, - "lightning_config": lightning_config, - } - }, - "image_logger": { - "target": "main.ImageLogger", - "params": { - "batch_frequency": 750, - "max_images": 4, - "clamp": True - } - }, - "learning_rate_logger": { - "target": "main.LearningRateMonitor", - "params": { - "logging_interval": "step", - #"log_momentum": True - } - }, - } - callbacks_cfg = lightning_config.callbacks or OmegaConf.create() - callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg) - trainer_kwargs["callbacks"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg] - - trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs) - - # data - data = instantiate_from_config(config.data) - # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html - # calling these ourselves should not be necessary but it is. - # lightning still takes care of proper multiprocessing though - data.prepare_data() - data.setup() - - # configure learning rate - bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate - if not cpu: - ngpu = len(lightning_config.trainer.gpus.strip(",").split(',')) - else: - ngpu = 1 - accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches or 1 - print(f"accumulate_grad_batches = {accumulate_grad_batches}") - lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches - model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr - print("Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format( - model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr)) - - # allow checkpointing via USR1 - def melk(*args, **kwargs): - # run all checkpoint hooks - if trainer.global_rank == 0: - print("Summoning checkpoint.") - ckpt_path = os.path.join(ckptdir, "last.ckpt") - trainer.save_checkpoint(ckpt_path) - - def divein(*args, **kwargs): - if trainer.global_rank == 0: - import pudb; pudb.set_trace() - - import signal - signal.signal(signal.SIGUSR1, melk) - signal.signal(signal.SIGUSR2, divein) - - # run - if opt.train: - try: - trainer.fit(model, data) - except Exception: - melk() - raise - if not opt.no_test and not trainer.interrupted: - trainer.test(model, data) - except Exception: - if opt.debug and trainer.global_rank==0: - try: - import pudb as debugger - except ImportError: - import pdb as debugger - debugger.post_mortem() - raise - finally: - # move newly created debug project to debug_runs - if opt.debug and not opt.resume and trainer.global_rank==0: - dst, name = os.path.split(logdir) - dst = os.path.join(dst, "debug_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - os.rename(logdir, dst) diff --git a/spaces/Raaniel/Audiomaister/app.py b/spaces/Raaniel/Audiomaister/app.py deleted file mode 100644 index e36183200f504cd2233abe7f90a11e50595fcf88..0000000000000000000000000000000000000000 --- a/spaces/Raaniel/Audiomaister/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import torch -import os -import gradio as gr - -from audiomaister import VoiceFixer -from audiomaister.models.gs_audiomaister import AudioMaister - -USE_CUDA = torch.cuda.is_available() - -def 
load_default_weights(): - from huggingface_hub import hf_hub_download - from pathlib import Path - - REPO_ID = "peterwilli/audio-maister" - print(f"Loading standard model weight at {REPO_ID}") - MODEL_FILE_NAME = "audiomaister_v1.5.ckpt" - checkpoint_path = hf_hub_download(repo_id=REPO_ID, filename=MODEL_FILE_NAME) - return checkpoint_path - -def inference(input_file, **kwargs): - checkpoint = load_default_weights() - state = torch.load(checkpoint, map_location=torch.device('cuda' if USE_CUDA else 'cpu')) - - main_model = VoiceFixer(state['hparams'], 1, 'vocals') - main_model.load_state_dict(state['weights']) - - inference_model = AudioMaister(main_model) - inference_model.restore(input=input_file, output="out.wav", mode=0) - - if USE_CUDA: - main_model.to('cuda') - inference_model.to('cuda') - - return "out.wav" - -made ="""
Made with ❤ by Raaniel
""" - -desc = """
AudiomAIster is a fork of VoiceFixer that focuses on general-purpose audio: where VoiceFixer focuses on voice alone, AudiomAIster reduces noise while enhancing both voice and sound effects, making it more suitable for talks and videos where more than just voices exist.
-""" - -gr.Interface( - fn=inference, - inputs=gr.Audio(type="filepath", source="upload", label = "Upload the audio that needs to be fixed!"), - outputs=gr.Audio(type = "filepath", label = "Your fixed audio is going to show up below: "), - title="🩺🎧 Fix all your podcast, video or live stream audio! 🎧🩺", - description = desc, - article = made, - theme=gr.themes.Soft(primary_hue="purple",secondary_hue="violet", neutral_hue="neutral") -).launch() \ No newline at end of file diff --git a/spaces/Rami/validate_chat_utd/Dockerfile b/spaces/Rami/validate_chat_utd/Dockerfile deleted file mode 100644 index b5a55ae3b5928ddc6dca732a5adb4b758c5e1512..0000000000000000000000000000000000000000 --- a/spaces/Rami/validate_chat_utd/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM python:3.8 - -RUN useradd -m -u 1000 user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH -WORKDIR $HOME/app -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app -ADD --chown=user ./.zeno_cache $HOME/app/.zeno_cache -RUN chown user:user -R $HOME/app - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - - -CMD ["zeno", "config.toml"] \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/detectors/loss.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/detectors/loss.py deleted file mode 100644 index 924bb896a66034ef45b11420ca6d48a462092ed1..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/detectors/loss.py +++ /dev/null @@ -1,452 +0,0 @@ -import torch -import torch.nn as nn -import math - -from DeDoDe.utils import * -import DeDoDe - - -class KeyPointLoss(nn.Module): - def __init__( - self, - smoothing_size=1, - use_max_logit=False, - entropy_target=80, - num_matches=1024, - jacobian_density_adjustment=False, - matchability_weight=1, - device="cuda", - ) -> None: - super().__init__() - X = torch.linspace(-1, 1, smoothing_size, device=device) - G = (-(X**2) / (2 * 1 / 2**2)).exp() - G = G / G.sum() - self.use_max_logit = use_max_logit - self.entropy_target = entropy_target - self.smoothing_kernel = G[None, None, None, :] - self.smoothing_size = smoothing_size - self.tracked_metrics = {} - self.center = None - self.num_matches = num_matches - self.jacobian_density_adjustment = jacobian_density_adjustment - self.matchability_weight = matchability_weight - - def compute_consistency(self, logits_A, logits_B_to_A, mask=None): - - masked_logits_A = torch.full_like(logits_A, -torch.inf) - masked_logits_A[mask] = logits_A[mask] - - masked_logits_B_to_A = torch.full_like(logits_B_to_A, -torch.inf) - masked_logits_B_to_A[mask] = logits_B_to_A[mask] - - log_p_A = masked_logits_A.log_softmax(dim=-1)[mask] - log_p_B_to_A = masked_logits_B_to_A.log_softmax(dim=-1)[mask] - - return self.compute_jensen_shannon_div(log_p_A, log_p_B_to_A) - - def compute_joint_neg_log_likelihood( - self, - logits_A, - logits_B_to_A, - detections_A=None, - detections_B_to_A=None, - mask=None, - device="cuda", - dtype=torch.float32, - num_matches=None, - ): - B, K, HW = logits_A.shape - logits_A, logits_B_to_A = logits_A.to(dtype), logits_B_to_A.to(dtype) - mask = mask[:, None].expand(B, K, HW).reshape(B, K * 
HW) - log_p_B_to_A = self.masked_log_softmax( - logits_B_to_A.reshape(B, K * HW), mask=mask - ) - log_p_A = self.masked_log_softmax(logits_A.reshape(B, K * HW), mask=mask) - log_p = log_p_A + log_p_B_to_A - if detections_A is None: - detections_A = torch.zeros_like(log_p_A) - if detections_B_to_A is None: - detections_B_to_A = torch.zeros_like(log_p_B_to_A) - detections_A = detections_A.reshape(B, HW) - detections_A[~mask] = 0 - detections_B_to_A = detections_B_to_A.reshape(B, HW) - detections_B_to_A[~mask] = 0 - log_p_target = log_p.detach() + 50 * detections_A + 50 * detections_B_to_A - num_matches = self.num_matches if num_matches is None else num_matches - best_k = -(-log_p_target).flatten().kthvalue(k=B * num_matches, dim=-1).values - p_target = (log_p_target > best_k[..., None]).float().reshape( - B, K * HW - ) / num_matches - return self.compute_cross_entropy( - log_p_A[mask], p_target[mask] - ) + self.compute_cross_entropy(log_p_B_to_A[mask], p_target[mask]) - - def compute_jensen_shannon_div(self, log_p, log_q): - return ( - 1 - / 2 - * (self.compute_kl_div(log_p, log_q) + self.compute_kl_div(log_q, log_p)) - ) - - def compute_kl_div(self, log_p, log_q): - return (log_p.exp() * (log_p - log_q)).sum(dim=-1) - - def masked_log_softmax(self, logits, mask): - masked_logits = torch.full_like(logits, -torch.inf) - masked_logits[mask] = logits[mask] - log_p = masked_logits.log_softmax(dim=-1) - return log_p - - def masked_softmax(self, logits, mask): - masked_logits = torch.full_like(logits, -torch.inf) - masked_logits[mask] = logits[mask] - log_p = masked_logits.softmax(dim=-1) - return log_p - - def compute_entropy(self, logits, mask=None): - p = self.masked_softmax(logits, mask)[mask] - log_p = self.masked_log_softmax(logits, mask)[mask] - return -(log_p * p).sum(dim=-1) - - def compute_detection_img(self, detections, mask, B, H, W, device="cuda"): - kernel_size = 5 - X = torch.linspace(-2, 2, kernel_size, device=device) - G = (-(X**2) / (2 * (1 / 2) ** 2)).exp() # half pixel std - G = G / G.sum() - det_smoothing_kernel = G[None, None, None, :] - det_img = torch.zeros( - (B, 1, H, W), device=device - ) # add small epsilon for later logstuff - for b in range(B): - valid_detections = (detections[b][mask[b]]).int() - det_img[b, 0][valid_detections[:, 1], valid_detections[:, 0]] = 1 - det_img = F.conv2d( - det_img, weight=det_smoothing_kernel, padding=(kernel_size // 2, 0) - ) - det_img = F.conv2d( - det_img, weight=det_smoothing_kernel.mT, padding=(0, kernel_size // 2) - ) - return det_img - - def compute_cross_entropy(self, log_p_hat, p): - return -(log_p_hat * p).sum(dim=-1) - - def compute_matchability(self, keypoint_p, has_depth, B, K, H, W, device="cuda"): - smooth_keypoint_p = F.conv2d( - keypoint_p.reshape(B, 1, H, W), - weight=self.smoothing_kernel, - padding=(self.smoothing_size // 2, 0), - ) - smooth_keypoint_p = F.conv2d( - smooth_keypoint_p, - weight=self.smoothing_kernel.mT, - padding=(0, self.smoothing_size // 2), - ) - log_p_hat = ( - (smooth_keypoint_p + 1e-8).log().reshape(B, H * W).log_softmax(dim=-1) - ) - smooth_has_depth = F.conv2d( - has_depth.reshape(B, 1, H, W), - weight=self.smoothing_kernel, - padding=(0, self.smoothing_size // 2), - ) - smooth_has_depth = F.conv2d( - smooth_has_depth, - weight=self.smoothing_kernel.mT, - padding=(self.smoothing_size // 2, 0), - ).reshape(B, H * W) - p = smooth_has_depth / smooth_has_depth.sum(dim=-1, keepdim=True) - return self.compute_cross_entropy(log_p_hat, p) - self.compute_cross_entropy( - (p + 1e-12).log(), p - ) - - def 
tracks_to_detections(self, tracks3D, pose, intrinsics, H, W): - tracks3D = tracks3D.double() - intrinsics = intrinsics.double() - bearing_vectors = pose[:, :3, :3] @ tracks3D.mT + pose[:, :3, 3:] - hom_pixel_coords = (intrinsics @ bearing_vectors).mT - pixel_coords = hom_pixel_coords[..., :2] / (hom_pixel_coords[..., 2:] + 1e-12) - legit_detections = ( - (pixel_coords > 0).prod(dim=-1) - * (pixel_coords[..., 0] < W - 1) - * (pixel_coords[..., 1] < H - 1) - * (tracks3D != 0).prod(dim=-1) - ) - return pixel_coords.float(), legit_detections.bool() - - def self_supervised_loss(self, outputs, batch): - keypoint_logits_A, keypoint_logits_B = outputs["keypoint_logits"].chunk(2) - B, K, H, W = keypoint_logits_A.shape - keypoint_logits_A = keypoint_logits_A.reshape(B, K, H * W) - keypoint_logits_B = keypoint_logits_B.reshape(B, K, H * W) - keypoint_logits = torch.cat((keypoint_logits_A, keypoint_logits_B)) - - warp_A_to_B, mask_A_to_B = get_homog_warp(batch["Homog_A_to_B"], H, W) - warp_B_to_A, mask_B_to_A = get_homog_warp( - torch.linalg.inv(batch["Homog_A_to_B"]), H, W - ) - B = 2 * B - - warp = torch.cat((warp_A_to_B, warp_B_to_A)).reshape(B, H * W, 4) - mask = torch.cat((mask_A_to_B, mask_B_to_A)).reshape(B, H * W) - - keypoint_logits_backwarped = F.grid_sample( - torch.cat((keypoint_logits_B, keypoint_logits_A)).reshape(B, K, H, W), - warp[..., -2:].reshape(B, H, W, 2).float(), - align_corners=False, - mode="bicubic", - ) - - keypoint_logits_backwarped = keypoint_logits_backwarped.reshape(B, K, H * W) - joint_log_likelihood_loss = self.compute_joint_neg_log_likelihood( - keypoint_logits, - keypoint_logits_backwarped, - mask=mask.bool(), - num_matches=5_000, - ).mean() - return joint_log_likelihood_loss - - def supervised_loss(self, outputs, batch): - keypoint_logits_A, keypoint_logits_B = outputs["keypoint_logits"].chunk(2) - B, K, H, W = keypoint_logits_A.shape - - detections_A, detections_B = batch["detections_A"], batch["detections_B"] - - tracks3D_A, tracks3D_B = batch["tracks3D_A"], batch["tracks3D_B"] - gt_warp_A_to_B, valid_mask_A_to_B = get_gt_warp( - batch["im_A_depth"], - batch["im_B_depth"], - batch["T_1to2"], - batch["K1"], - batch["K2"], - H=H, - W=W, - ) - gt_warp_B_to_A, valid_mask_B_to_A = get_gt_warp( - batch["im_B_depth"], - batch["im_A_depth"], - batch["T_1to2"].inverse(), - batch["K2"], - batch["K1"], - H=H, - W=W, - ) - keypoint_logits_A = keypoint_logits_A.reshape(B, K, H * W) - keypoint_logits_B = keypoint_logits_B.reshape(B, K, H * W) - keypoint_logits = torch.cat((keypoint_logits_A, keypoint_logits_B)) - - B = 2 * B - gt_warp = torch.cat((gt_warp_A_to_B, gt_warp_B_to_A)) - valid_mask = torch.cat((valid_mask_A_to_B, valid_mask_B_to_A)) - valid_mask = valid_mask.reshape(B, H * W) - binary_mask = valid_mask == 1 - if self.jacobian_density_adjustment: - j_logdet = jacobi_determinant( - gt_warp.reshape(B, H, W, 4), valid_mask.reshape(B, H, W).float() - )[:, None] - else: - j_logdet = 0 - tracks3D = torch.cat((tracks3D_A, tracks3D_B)) - - # detections, legit_detections = self.tracks_to_detections(tracks3D, torch.cat((batch["pose_A"],batch["pose_B"])), torch.cat((batch["K1"],batch["K2"])), H, W) - # detections_backwarped, legit_backwarped_detections = self.tracks_to_detections(torch.cat((tracks3D_B, tracks3D_A)), torch.cat((batch["pose_A"],batch["pose_B"])), torch.cat((batch["K1"],batch["K2"])), H, W) - detections = torch.cat((detections_A, detections_B)) - legit_detections = ( - (detections > 0).prod(dim=-1) - * (detections[..., 0] < W) - * (detections[..., 1] < H) - 
).bool() - det_imgs_A, det_imgs_B = self.compute_detection_img( - detections, legit_detections, B, H, W - ).chunk(2) - det_imgs = torch.cat((det_imgs_A, det_imgs_B)) - # det_imgs_backwarped = self.compute_detection_img(detections_backwarped, legit_backwarped_detections, B, H, W) - det_imgs_backwarped = F.grid_sample( - torch.cat((det_imgs_B, det_imgs_A)).reshape(B, 1, H, W), - gt_warp[..., -2:].reshape(B, H, W, 2).float(), - align_corners=False, - mode="bicubic", - ) - - keypoint_logits_backwarped = F.grid_sample( - torch.cat((keypoint_logits_B, keypoint_logits_A)).reshape(B, K, H, W), - gt_warp[..., -2:].reshape(B, H, W, 2).float(), - align_corners=False, - mode="bicubic", - ) - - # Note: Below step should be taken, but seems difficult to get it to work well. - # keypoint_logits_B_to_A = keypoint_logits_B_to_A + j_logdet_A_to_B # adjust for the viewpoint by log jacobian of warp - keypoint_logits_backwarped = (keypoint_logits_backwarped + j_logdet).reshape( - B, K, H * W - ) - - depth = F.interpolate( - torch.cat( - (batch["im_A_depth"][:, None], batch["im_B_depth"][:, None]), dim=0 - ), - size=(H, W), - mode="bilinear", - align_corners=False, - ) - has_depth = (depth > 0).float().reshape(B, H * W) - - joint_log_likelihood_loss = self.compute_joint_neg_log_likelihood( - keypoint_logits, - keypoint_logits_backwarped, - mask=binary_mask, - detections_A=det_imgs, - detections_B_to_A=det_imgs_backwarped, - ).mean() - keypoint_p = ( - keypoint_logits.reshape(B, K * H * W) - .softmax(dim=-1) - .reshape(B, K, H * W) - .sum(dim=1) - ) - matchability_loss = self.compute_matchability( - keypoint_p, has_depth, B, K, H, W - ).mean() - - # peakiness_loss = self.compute_negative_peakiness(keypoint_logits.reshape(B,H,W), mask = binary_mask) - # mnn_loss = self.compute_mnn_loss(keypoint_logits_A, keypoint_logits_B, gt_warp_A_to_B, valid_mask_A_to_B, B, H, W) - B = B // 2 - import matplotlib.pyplot as plt - - kpts_A = sample_keypoints( - keypoint_p[:B].reshape(B, H, W), - use_nms=False, - sample_topk=True, - num_samples=4 * 2048, - ) - kpts_B = sample_keypoints( - keypoint_p[B:].reshape(B, H, W), - use_nms=False, - sample_topk=True, - num_samples=4 * 2048, - ) - kpts_A_to_B = F.grid_sample( - gt_warp_A_to_B[..., 2:].float().permute(0, 3, 1, 2), - kpts_A[..., None, :], - align_corners=False, - mode="bilinear", - )[..., 0].mT - legit_A_to_B = F.grid_sample( - valid_mask_A_to_B.reshape(B, 1, H, W), - kpts_A[..., None, :], - align_corners=False, - mode="bilinear", - )[..., 0, :, 0] - percent_inliers = ( - ( - torch.cdist(kpts_A_to_B, kpts_B).min(dim=-1).values[legit_A_to_B > 0] - < 0.01 - ) - .float() - .mean() - ) - self.tracked_metrics["mega_percent_inliers"] = ( - 0.9 * self.tracked_metrics.get("mega_percent_inliers", percent_inliers) - + 0.1 * percent_inliers - ) - - if torch.rand(1) > 0.995: - keypoint_logits_A_to_B = keypoint_logits_backwarped[:B] - import matplotlib.pyplot as plt - import os - - os.makedirs("vis", exist_ok=True) - for b in range(0, B, 2): - # import cv2 - plt.scatter( - kpts_A_to_B[b, :, 0].cpu(), -kpts_A_to_B[b, :, 1].cpu(), s=1 - ) - plt.scatter(kpts_B[b, :, 0].cpu(), -kpts_B[b, :, 1].cpu(), s=1) - plt.xlim(-1, 1) - plt.ylim(-1, 1) - plt.savefig(f"vis/keypoints_A_to_B_vs_B_{b}.png") - plt.close() - tensor_to_pil( - keypoint_logits_A[b] - .reshape(1, H, W) - .expand(3, H, W) - .detach() - .cpu(), - autoscale=True, - ).save(f"vis/logits_A_{b}.png") - tensor_to_pil( - keypoint_logits_B[b] - .reshape(1, H, W) - .expand(3, H, W) - .detach() - .cpu(), - autoscale=True, - 
).save(f"vis/logits_B_{b}.png") - tensor_to_pil( - keypoint_logits_A_to_B[b] - .reshape(1, H, W) - .expand(3, H, W) - .detach() - .cpu(), - autoscale=True, - ).save(f"vis/logits_A_to_B{b}.png") - tensor_to_pil( - keypoint_logits_A[b] - .softmax(dim=-1) - .reshape(1, H, W) - .expand(3, H, W) - .detach() - .cpu(), - autoscale=True, - ).save(f"vis/keypoint_p_A_{b}.png") - tensor_to_pil( - keypoint_logits_B[b] - .softmax(dim=-1) - .reshape(1, H, W) - .expand(3, H, W) - .detach() - .cpu(), - autoscale=True, - ).save(f"vis/keypoint_p_B_{b}.png") - tensor_to_pil( - has_depth[b].reshape(1, H, W).expand(3, H, W).detach().cpu(), - autoscale=True, - ).save(f"vis/has_depth_A_{b}.png") - tensor_to_pil( - valid_mask_A_to_B[b] - .reshape(1, H, W) - .expand(3, H, W) - .detach() - .cpu(), - autoscale=True, - ).save(f"vis/valid_mask_A_to_B_{b}.png") - tensor_to_pil(batch["im_A"][b], unnormalize=True).save( - f"vis/im_A_{b}.jpg" - ) - tensor_to_pil(batch["im_B"][b], unnormalize=True).save( - f"vis/im_B_{b}.jpg" - ) - plt.close() - tot_loss = ( - joint_log_likelihood_loss + self.matchability_weight * matchability_loss - ) # - # tot_loss = tot_loss + (-2*consistency_loss).detach().exp()*compression_loss - if torch.rand(1) > 1: - print( - f"Precent Inlier: {self.tracked_metrics.get('mega_percent_inliers', 0)}" - ) - print(f"{joint_log_likelihood_loss=} {matchability_loss=}") - print(f"Total Loss: {tot_loss.item()}") - return tot_loss - - def forward(self, outputs, batch): - - if not isinstance(outputs, list): - outputs = [outputs] - losses = 0 - for output in outputs: - if "Homog_A_to_B" in batch: - losses = losses + self.self_supervised_loss(output, batch) - else: - losses = losses + self.supervised_loss(output, batch) - return losses diff --git a/spaces/Reeve/Ohayou_Face/models/stylegan2/model.py b/spaces/Reeve/Ohayou_Face/models/stylegan2/model.py deleted file mode 100644 index bf88e51cc1d74b0aeb9337b381b37a3f5d54044a..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/models/stylegan2/model.py +++ /dev/null @@ -1,674 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = 
make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = 
input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - 
for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - if return_latents: - return latent - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, 
i + 2], skip) - - i += 2 - - image = skip - - if return_features: - return image, out - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/Reself/StableVideo/ldm/modules/image_degradation/utils_image.py b/spaces/Reself/StableVideo/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/Reself/StableVideo/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - 
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. 
- """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) 
- - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) - - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. 
-# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# 
ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = 
img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. 
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def 
imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/Ricecake123/RVC-demo/docs/faq_en.md b/spaces/Ricecake123/RVC-demo/docs/faq_en.md deleted file mode 100644 index 05f03ec0467706c319c0c19c83c200f43eb8f4a0..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/docs/faq_en.md +++ /dev/null @@ -1,95 +0,0 @@ -## Q1:ffmpeg error/utf8 error. -It is most likely not a FFmpeg issue, but rather an audio path issue; - -FFmpeg may encounter an error when reading paths containing special characters like spaces and (), which may cause an FFmpeg error; and when the training set's audio contains Chinese paths, writing it into filelist.txt may cause a utf8 error.
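To catch the path problem before it reaches FFmpeg or filelist.txt, a quick pre-check can help. The sketch below is illustrative only (the helper name and the exact characters it flags are assumptions, not part of the project):

```python
from pathlib import Path

def check_audio_path(path: str) -> list[str]:
    """Warn about characters that commonly break FFmpeg calls or utf8 handling in filelist.txt."""
    warnings = []
    if not Path(path).exists():
        warnings.append("path does not exist")
    if any(ch in path for ch in " ()"):
        warnings.append("path contains spaces or parentheses; consider renaming the file/folder")
    if not path.isascii():
        warnings.append("path contains non-ASCII characters; may trigger utf8 errors in filelist.txt")
    return warnings

print(check_audio_path(r"E:\datasets\my set (1)\clip 01.wav"))
```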
- -## Q2:Cannot find index file after "One-click Training". -If it displays "Training is done. The program is closed," then the model has been trained successfully, and any error messages that follow can be ignored; - -A missing 'added' index file after one-click training is usually caused by a training set so large that building the index gets stuck; this has been addressed by adding the index in batches, which avoids running out of memory during index construction. As a temporary workaround, try clicking the "Train Index" button again.<br>
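As a rough illustration of the batched index building mentioned above, here is a sketch using a faiss IVF index of the kind named elsewhere in this FAQ (added_IVF*_Flat_*); the feature matrix, dimensions, and batch size are placeholders, and this is not the project's exact code:

```python
import faiss
import numpy as np

feats = np.random.rand(200_000, 256).astype("float32")  # stand-in for extracted features

index = faiss.index_factory(feats.shape[1], "IVF256,Flat")
index.train(feats)

batch = 8192
for i in range(0, feats.shape[0], batch):
    index.add(feats[i:i + batch])  # adding in batches keeps peak memory bounded

faiss.write_index(index, "added_IVF256_Flat_example.index")
```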
- -## Q3:Cannot find the model in “Inferencing timbre” after training -Click “Refresh timbre list” and check again; if still not visible, check if there are any errors during training and send screenshots of the console, web UI, and logs/experiment_name/*.log to the developers for further analysis.
- -## Q4:How to share a model/How to use others' models? -The pth files stored in rvc_root/logs/experiment_name are not meant for sharing or inference, but for storing the experiment checkpoints for reproducibility and further training. The model to be shared should be the 60+ MB pth file in the weights folder; - -In the future, weights/exp_name.pth and logs/exp_name/added_xxx.index will be merged into a single weights/exp_name.zip file to eliminate the need for manual index input; so share the zip file, not the pth file, unless you want to continue training on a different machine; - -Copying/sharing the several hundred MB pth files from the logs folder to the weights folder for forced inference may result in errors such as missing f0, tgt_sr, or other keys. You need to use the ckpt tab at the bottom to extract the smaller model: choose manually, or automatically if the information is found in logs/exp_name, whether to include pitch information and the target audio sampling rate. After extraction, there will be a 60+ MB pth file in the weights folder, and you can refresh the timbre list to use it.<br>
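Until that merge happens, the two files can be bundled by hand. A minimal sketch (the paths simply mirror the placeholders above and are assumptions):

```python
import zipfile

# Bundle the shareable weights file and its retrieval index into one archive.
with zipfile.ZipFile("weights/exp_name.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("weights/exp_name.pth", arcname="exp_name.pth")
    zf.write("logs/exp_name/added_xxx.index", arcname="added_xxx.index")
```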
- -## Q5:Connection Error. -You may have closed the console (black command line window).
- -## Q6:WebUI popup 'Expecting value: line 1 column 1 (char 0)'. -Please disable system LAN proxy/global proxy and then refresh.
- -## Q7:How to train and infer without the WebUI? -Training script:
-You can run training in WebUI first, and the command-line versions of dataset preprocessing and training will be displayed in the message window.
- -Inference script:
-https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/myinfer.py
- - -e.g.
- -runtime\python.exe myinfer.py 0 "E:\codes\py39\RVC-beta\todo-songs\1111.wav" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "test.wav" "weights/mi-test.pth" 0.6 cuda:0 True
- - -f0up_key=sys.argv[1]
-input_path=sys.argv[2]
-index_path=sys.argv[3]
-f0method=sys.argv[4]#harvest or pm
-opt_path=sys.argv[5]
-model_path=sys.argv[6]
-index_rate=float(sys.argv[7])
-device=sys.argv[8]
-is_half=bool(sys.argv[9])
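As a sketch of how those positional arguments might be read and sanity-checked, the snippet below simply mirrors the listing above; it is not the actual myinfer.py. Note that bool() of any non-empty string (including "False") is True, so parsing the string explicitly is safer:

```python
import sys

# Expected call, mirroring the example above:
# python infer_sketch.py 0 input.wav added.index harvest out.wav weights/model.pth 0.6 cuda:0 True
f0up_key = int(sys.argv[1])
input_path = sys.argv[2]
index_path = sys.argv[3]
f0method = sys.argv[4]                      # "harvest" or "pm"
opt_path = sys.argv[5]
model_path = sys.argv[6]
index_rate = float(sys.argv[7])
device = sys.argv[8]                        # e.g. "cuda:0" or "cpu"
is_half = sys.argv[9].lower() == "true"     # bool("False") would be True, hence the explicit compare

print(f"transpose={f0up_key}, input={input_path}, index={index_path}, f0method={f0method}, "
      f"out={opt_path}, model={model_path}, index_rate={index_rate}, device={device}, half={is_half}")
```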
- -## Q8:Cuda error/Cuda out of memory. -There is a small chance that there is a problem with the CUDA configuration or the device is not supported; more likely, there is not enough memory (out of memory).
- -For training, reduce the batch size (if reducing it to 1 is still not enough, you may need to switch to a graphics card with more memory); for inference, reduce the x_pad, x_query, x_center, and x_max settings in config.py as needed. Cards with less than 4 GB of memory (e.g. the 1060 3G and various 2 GB cards) are best written off, while 4 GB cards still have a chance.<br>
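For orientation, the four settings are simple numeric values; the numbers below are placeholders for illustration, not recommended settings, and their exact meaning should be checked against the comments in your own config.py:

```python
# Illustrative only: smaller values lower peak GPU memory during inference,
# at some cost in speed/quality. Real defaults depend on your card.
x_pad = 1
x_query = 5
x_center = 30
x_max = 32
```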
- -## Q9:How many total_epoch are optimal? -If the training dataset's audio quality is poor and the noise floor is high, 20-30 epochs are sufficient. Setting it too high won't improve the audio quality of your low-quality training set.
- -If the training set audio quality is high, the noise floor is low, and there is sufficient duration, you can increase it. 200 is acceptable (since training is fast, and if you're able to prepare a high-quality training set, your GPU likely can handle a longer training duration without issue).
- -## Q10:How much training set duration is needed? - -A dataset of around 10min to 50min is recommended.
- -If the sound quality is high and the noise floor is low, more data can be added, provided the dataset's timbre is uniform.<br>
- -For a high-quality training set (clean audio + a distinctive timbre), 5min to 10min is fine.<br>
- -There are some people who have trained successfully with 1min to 2min data, but the success is not reproducible by others and is not very informative.
This requires that the training set have a very distinctive timbre (e.g. a high-pitched, airy anime-girl voice) and that the audio quality be high; -Training on less than 1min of data has not been attempted successfully so far and is not recommended.
- - -## Q11:What is the index rate for and how to adjust it? -If the timbre quality of the pre-trained base model and of the inference source is higher than that of the training set, they can raise the timbre quality of the inference result, but at the cost of biasing the result towards the timbre of the base model/inference source rather than that of the training set. This is generally referred to as "timbre leakage" (or "tone leakage").
- -The index rate is used to reduce/resolve this timbre leakage. If the index rate is set to 1, in theory there is no timbre leakage from the inference source and the timbre is biased entirely towards the training set. If the training set has lower sound quality than the inference source, a higher index rate may therefore reduce the sound quality. Turning it down to 0 disables the retrieval blending that protects the training-set timbre.
- -If the training set has good audio quality and long duration, turn up total_epoch; the trained model then relies less on the inference source and the pretrained base model, there is little "timbre leakage", and the index_rate matters little; you can even skip creating/sharing the index file.
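Conceptually, the index rate is just a linear blend weight between features retrieved from the training-set index and the features extracted from the source audio. The sketch below is a conceptual illustration, not the actual pipeline code; the real implementation works on content features inside the inference pipeline and may differ between versions.

```python
import numpy as np

def blend_features(retrieved: np.ndarray, source: np.ndarray, index_rate: float) -> np.ndarray:
    """Conceptual sketch of retrieval blending.

    retrieved: features looked up from the training-set .index file
    source:    features extracted from the audio being converted
    index_rate = 1.0 -> rely fully on the index (least timbre leakage)
    index_rate = 0.0 -> ignore the index entirely (no protection of the training timbre)
    """
    return index_rate * retrieved + (1.0 - index_rate) * source
```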
- -## Q12:How to choose the gpu when inferring? -In the config.py file, set the card number after "device cuda:".
- -The mapping between card number and graphics card can be seen in the graphics card information section of the training tab.
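For illustration, the relevant line in config.py looks something like the sketch below (the variable name and surrounding code are assumptions; the point is only that the number after "cuda:" picks the card).

```python
# Pick the GPU by index; the index-to-card mapping is shown in the
# "graphics card information" section of the training tab.
device = "cuda:1"  # second GPU; "cuda:0" would be the first card
```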
- -## Q13:How to use the model saved in the middle of training? -Save it via model extraction at the bottom of the ckpt processing tab. - -## Q14:File/memory error (when training)? -Too many processes are running and there is not enough memory. You may fix it by: - -1. decreasing the input in the field "Threads of CPU". - -2. pre-cutting the training set into shorter audio files. - - - diff --git a/spaces/Riksarkivet/htr_demo/helper/text/overview/htrflow/htrflow_tab4.md b/spaces/Riksarkivet/htr_demo/helper/text/overview/htrflow/htrflow_tab4.md deleted file mode 100644 index 89923a5e4c23b25546cc1dd23d813f9a1870acd6..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/helper/text/overview/htrflow/htrflow_tab4.md +++ /dev/null @@ -1,7 +0,0 @@ -### Text Recognition - -The text-recognition model was trained on approximately one million handwritten text-line images ranging from the 17th to the 19th century. See the model card for detailed evaluation results, and results from some fine-tuning experiments. - -
-HTR_tool -
diff --git a/spaces/Robert001/UniControl-Demo/annotator/mlsd/models/mbv2_mlsd_tiny.py b/spaces/Robert001/UniControl-Demo/annotator/mlsd/models/mbv2_mlsd_tiny.py deleted file mode 100644 index a7b90c3f97fe1475461294e820263353b0008007..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/mlsd/models/mbv2_mlsd_tiny.py +++ /dev/null @@ -1,285 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -import os -import sys -import torch -import torch.nn as nn -import torch.utils.model_zoo as model_zoo -from torch.nn import functional as F - - -class BlockTypeA(nn.Module): - def __init__(self, in_c1, in_c2, out_c1, out_c2, upscale = True): - super(BlockTypeA, self).__init__() - self.conv1 = nn.Sequential( - nn.Conv2d(in_c2, out_c2, kernel_size=1), - nn.BatchNorm2d(out_c2), - nn.ReLU(inplace=True) - ) - self.conv2 = nn.Sequential( - nn.Conv2d(in_c1, out_c1, kernel_size=1), - nn.BatchNorm2d(out_c1), - nn.ReLU(inplace=True) - ) - self.upscale = upscale - - def forward(self, a, b): - b = self.conv1(b) - a = self.conv2(a) - b = F.interpolate(b, scale_factor=2.0, mode='bilinear', align_corners=True) - return torch.cat((a, b), dim=1) - - -class BlockTypeB(nn.Module): - def __init__(self, in_c, out_c): - super(BlockTypeB, self).__init__() - self.conv1 = nn.Sequential( - nn.Conv2d(in_c, in_c, kernel_size=3, padding=1), - nn.BatchNorm2d(in_c), - nn.ReLU() - ) - self.conv2 = nn.Sequential( - nn.Conv2d(in_c, out_c, kernel_size=3, padding=1), - nn.BatchNorm2d(out_c), - nn.ReLU() - ) - - def forward(self, x): - x = self.conv1(x) + x - x = self.conv2(x) - return x - -class BlockTypeC(nn.Module): - def __init__(self, in_c, out_c): - super(BlockTypeC, self).__init__() - self.conv1 = nn.Sequential( - nn.Conv2d(in_c, in_c, kernel_size=3, padding=5, dilation=5), - nn.BatchNorm2d(in_c), - nn.ReLU() - ) - self.conv2 = nn.Sequential( - nn.Conv2d(in_c, in_c, kernel_size=3, padding=1), - nn.BatchNorm2d(in_c), - nn.ReLU() - ) - self.conv3 = nn.Conv2d(in_c, out_c, kernel_size=1) - - def forward(self, x): - x = self.conv1(x) - x = self.conv2(x) - x = self.conv3(x) - return x - -def _make_divisible(v, divisor, min_value=None): - """ - This function is taken from the original tf repo. - It ensures that all layers have a channel number that is divisible by 8 - It can be seen here: - https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py - :param v: - :param divisor: - :param min_value: - :return: - """ - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. 
- if new_v < 0.9 * v: - new_v += divisor - return new_v - - -class ConvBNReLU(nn.Sequential): - def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1): - self.channel_pad = out_planes - in_planes - self.stride = stride - #padding = (kernel_size - 1) // 2 - - # TFLite uses slightly different padding than PyTorch - if stride == 2: - padding = 0 - else: - padding = (kernel_size - 1) // 2 - - super(ConvBNReLU, self).__init__( - nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False), - nn.BatchNorm2d(out_planes), - nn.ReLU6(inplace=True) - ) - self.max_pool = nn.MaxPool2d(kernel_size=stride, stride=stride) - - - def forward(self, x): - # TFLite uses different padding - if self.stride == 2: - x = F.pad(x, (0, 1, 0, 1), "constant", 0) - #print(x.shape) - - for module in self: - if not isinstance(module, nn.MaxPool2d): - x = module(x) - return x - - -class InvertedResidual(nn.Module): - def __init__(self, inp, oup, stride, expand_ratio): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2] - - hidden_dim = int(round(inp * expand_ratio)) - self.use_res_connect = self.stride == 1 and inp == oup - - layers = [] - if expand_ratio != 1: - # pw - layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1)) - layers.extend([ - # dw - ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - ]) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - -class MobileNetV2(nn.Module): - def __init__(self, pretrained=True): - """ - MobileNet V2 main class - Args: - num_classes (int): Number of classes - width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount - inverted_residual_setting: Network structure - round_nearest (int): Round the number of channels in each layer to be a multiple of this number - Set to 1 to turn off rounding - block: Module specifying inverted residual building block for mobilenet - """ - super(MobileNetV2, self).__init__() - - block = InvertedResidual - input_channel = 32 - last_channel = 1280 - width_mult = 1.0 - round_nearest = 8 - - inverted_residual_setting = [ - # t, c, n, s - [1, 16, 1, 1], - [6, 24, 2, 2], - [6, 32, 3, 2], - [6, 64, 4, 2], - #[6, 96, 3, 1], - #[6, 160, 3, 2], - #[6, 320, 1, 1], - ] - - # only check the first element, assuming user knows t,c,n,s are required - if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4: - raise ValueError("inverted_residual_setting should be non-empty " - "or a 4-element list, got {}".format(inverted_residual_setting)) - - # building first layer - input_channel = _make_divisible(input_channel * width_mult, round_nearest) - self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest) - features = [ConvBNReLU(4, input_channel, stride=2)] - # building inverted residual blocks - for t, c, n, s in inverted_residual_setting: - output_channel = _make_divisible(c * width_mult, round_nearest) - for i in range(n): - stride = s if i == 0 else 1 - features.append(block(input_channel, output_channel, stride, expand_ratio=t)) - input_channel = output_channel - self.features = nn.Sequential(*features) - - self.fpn_selected = [3, 6, 10] - # weight initialization - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out') - if 
m.bias is not None: - nn.init.zeros_(m.bias) - elif isinstance(m, nn.BatchNorm2d): - nn.init.ones_(m.weight) - nn.init.zeros_(m.bias) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - nn.init.zeros_(m.bias) - - #if pretrained: - # self._load_pretrained_model() - - def _forward_impl(self, x): - # This exists since TorchScript doesn't support inheritance, so the superclass method - # (this one) needs to have a name other than `forward` that can be accessed in a subclass - fpn_features = [] - for i, f in enumerate(self.features): - if i > self.fpn_selected[-1]: - break - x = f(x) - if i in self.fpn_selected: - fpn_features.append(x) - - c2, c3, c4 = fpn_features - return c2, c3, c4 - - - def forward(self, x): - return self._forward_impl(x) - - def _load_pretrained_model(self): - pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/mobilenet_v2-b0353104.pth') - model_dict = {} - state_dict = self.state_dict() - for k, v in pretrain_dict.items(): - if k in state_dict: - model_dict[k] = v - state_dict.update(model_dict) - self.load_state_dict(state_dict) - - -class MobileV2_MLSD_Tiny(nn.Module): - def __init__(self): - super(MobileV2_MLSD_Tiny, self).__init__() - - self.backbone = MobileNetV2(pretrained=True) - - self.block12 = BlockTypeA(in_c1= 32, in_c2= 64, - out_c1= 64, out_c2=64) - self.block13 = BlockTypeB(128, 64) - - self.block14 = BlockTypeA(in_c1 = 24, in_c2 = 64, - out_c1= 32, out_c2= 32) - self.block15 = BlockTypeB(64, 64) - - self.block16 = BlockTypeC(64, 16) - - def forward(self, x): - c2, c3, c4 = self.backbone(x) - - x = self.block12(c3, c4) - x = self.block13(x) - x = self.block14(c2, x) - x = self.block15(x) - x = self.block16(x) - x = x[:, 7:, :, :] - #print(x.shape) - x = F.interpolate(x, scale_factor=2.0, mode='bilinear', align_corners=True) - - return x \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/__init__.py deleted file mode 100644 index 52e4b48d383a84a055dcd7f6236f6e8e58eab924..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/__init__.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_module import BaseModule, ModuleList, Sequential -from .base_runner import BaseRunner -from .builder import RUNNERS, build_runner -from .checkpoint import (CheckpointLoader, _load_checkpoint, - _load_checkpoint_with_prefix, load_checkpoint, - load_state_dict, save_checkpoint, weights_to_cpu) -from .default_constructor import DefaultRunnerConstructor -from .dist_utils import (allreduce_grads, allreduce_params, get_dist_info, - init_dist, master_only) -from .epoch_based_runner import EpochBasedRunner, Runner -from .fp16_utils import LossScaler, auto_fp16, force_fp32, wrap_fp16_model -from .hooks import (HOOKS, CheckpointHook, ClosureHook, DistEvalHook, - DistSamplerSeedHook, DvcliveLoggerHook, EMAHook, EvalHook, - Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, Hook, IterTimerHook, - LoggerHook, LrUpdaterHook, MlflowLoggerHook, - NeptuneLoggerHook, OptimizerHook, PaviLoggerHook, - SyncBuffersHook, TensorboardLoggerHook, TextLoggerHook, - WandbLoggerHook) -from .iter_based_runner import IterBasedRunner, IterLoader -from .log_buffer import LogBuffer -from .optimizer import (OPTIMIZER_BUILDERS, OPTIMIZERS, - DefaultOptimizerConstructor, build_optimizer, - build_optimizer_constructor) -from .priority import Priority, get_priority -from .utils import get_host_info, get_time_str, obj_from_dict, set_random_seed - -__all__ = [ - 'BaseRunner', 'Runner', 'EpochBasedRunner', 'IterBasedRunner', 'LogBuffer', - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', 'LoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'MlflowLoggerHook', - 'DvcliveLoggerHook', '_load_checkpoint', 'load_state_dict', - 'load_checkpoint', 'weights_to_cpu', 'save_checkpoint', 'Priority', - 'get_priority', 'get_host_info', 'get_time_str', 'obj_from_dict', - 'init_dist', 'get_dist_info', 'master_only', 'OPTIMIZER_BUILDERS', - 'OPTIMIZERS', 'DefaultOptimizerConstructor', 'build_optimizer', - 'build_optimizer_constructor', 'IterLoader', 'set_random_seed', - 'auto_fp16', 'force_fp32', 'wrap_fp16_model', 'Fp16OptimizerHook', - 'SyncBuffersHook', 'EMAHook', 'build_runner', 'RUNNERS', 'allreduce_grads', - 'allreduce_params', 'LossScaler', 'CheckpointLoader', 'BaseModule', - '_load_checkpoint_with_prefix', 'EvalHook', 'DistEvalHook', 'Sequential', - 'ModuleList', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook', 'DefaultRunnerConstructor' -] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/faster_rcnn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/faster_rcnn.py deleted file mode 100644 index 81bad0f43a48b1022c4cd996e26d6c90be93d4d0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/faster_rcnn.py +++ /dev/null @@ -1,24 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FasterRCNN(TwoStageDetector): - """Implementation of `Faster R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(FasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) diff --git 
a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/atss_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/atss_head.py deleted file mode 100644 index ff55dfa1790ba270539fc9f623dbb2984fa1a99e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/atss_head.py +++ /dev/null @@ -1,689 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_sampler, - images_to_levels, multi_apply, multiclass_nms, - reduce_mean, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - -EPS = 1e-12 - - -@HEADS.register_module() -class ATSSHead(AnchorHead): - """Bridging the Gap Between Anchor-based and Anchor-free Detection via - Adaptive Training Sample Selection. - - ATSS head structure is similar with FCOS, however ATSS use anchor boxes - and assign label by Adaptive Training Sample Selection instead max-iou. - - https://arxiv.org/abs/1912.02424 - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(ATSSHead, self).__init__(num_classes, in_channels, **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.loss_centerness = build_loss(loss_centerness) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.atss_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.atss_reg = nn.Conv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - self.atss_centerness = nn.Conv2d( - self.feat_channels, self.num_anchors * 1, 3, padding=1) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.anchor_generator.strides]) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.atss_cls, std=0.01, bias=bias_cls) - normal_init(self.atss_reg, std=0.01) - normal_init(self.atss_centerness, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. 
- - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - centerness (Tensor): Centerness for a single scale level, the - channel number is (N, num_anchors * 1, H, W). - """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.atss_cls(cls_feat) - # we just follow atss, not apply exp in bbox_pred - bbox_pred = scale(self.atss_reg(reg_feat)).float() - centerness = self.atss_centerness(reg_feat) - return cls_score, bbox_pred, centerness - - def loss_single(self, anchors, cls_score, bbox_pred, centerness, labels, - label_weights, bbox_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (N, num_total_anchors, 4). - num_total_samples (int): Number os positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - centerness = centerness.permute(0, 2, 3, 1).reshape(-1) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # classification loss - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_centerness = centerness[pos_inds] - - centerness_targets = self.centerness_target( - pos_anchors, pos_bbox_targets) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchors, pos_bbox_pred) - pos_decode_bbox_targets = self.bbox_coder.decode( - pos_anchors, pos_bbox_targets) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=centerness_targets, - avg_factor=1.0) - - # centerness loss - loss_centerness = self.loss_centerness( - pos_centerness, - centerness_targets, - avg_factor=num_total_samples) - - else: - loss_bbox = bbox_pred.sum() * 0 - loss_centerness = centerness.sum() * 0 - centerness_targets = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, loss_centerness, centerness_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - centernesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - centernesses (list[Tensor]): Centerness for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, loss_centerness,\ - bbox_avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - centernesses, - labels_list, - label_weights_list, - bbox_targets_list, - num_total_samples=num_total_samples) - - bbox_avg_factor = sum(bbox_avg_factor) - bbox_avg_factor = reduce_mean(bbox_avg_factor).item() - if bbox_avg_factor < EPS: - bbox_avg_factor = 1 - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_centerness=loss_centerness) - - def centerness_target(self, anchors, bbox_targets): - # only calculate pos centerness targets, otherwise there may be nan - gts = self.bbox_coder.decode(anchors, bbox_targets) - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - l_ = anchors_cx - gts[:, 0] - t_ = anchors_cy - gts[:, 1] - r_ = gts[:, 2] - anchors_cx - b_ = gts[:, 3] - anchors_cy - - left_right = torch.stack([l_, r_], dim=1) - top_bottom = torch.stack([t_, b_], dim=1) - centerness = torch.sqrt( - (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * - (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])) - assert not torch.isnan(centerness).any() - return centerness - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - centernesses, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - centernesses (list[Tensor]): Centerness for each scale level with - shape (N, num_anchors * 1, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. Default: None. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - - cls_score_list = [cls_scores[i].detach() for i in range(num_levels)] - bbox_pred_list = [bbox_preds[i].detach() for i in range(num_levels)] - centerness_pred_list = [ - centernesses[i].detach() for i in range(num_levels) - ] - img_shapes = [ - img_metas[i]['img_shape'] for i in range(cls_scores[0].shape[0]) - ] - scale_factors = [ - img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0]) - ] - result_list = self._get_bboxes(cls_score_list, bbox_pred_list, - centerness_pred_list, mlvl_anchors, - img_shapes, scale_factors, cfg, rescale, - with_nms) - return result_list - - def _get_bboxes(self, - cls_scores, - bbox_preds, - centernesses, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into labeled boxes. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - with shape (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for a single - scale level with shape (N, num_anchors * 4, H, W). - centernesses (list[Tensor]): Centerness for a single scale level - with shape (N, num_anchors * 1, H, W). - mlvl_anchors (list[Tensor]): Box reference for a single scale level - with shape (num_total_anchors, 4). - img_shapes (list[tuple[int]]): Shape of the input image, - list[(height, width, 3)]. - scale_factors (list[ndarray]): Scale factor of the image arrange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - device = cls_scores[0].device - batch_size = cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_centerness = [] - for cls_score, bbox_pred, centerness, anchors in zip( - cls_scores, bbox_preds, centernesses, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(0, 2, 3, 1).reshape( - batch_size, -1, self.cls_out_channels).sigmoid() - centerness = centerness.permute(0, 2, 3, - 1).reshape(batch_size, - -1).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - - # Always keep topk op for dynamic input in onnx - if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export() - or scores.shape[-2] > nms_pre_tensor): - from torch import _shape_as_tensor - # keep shape as tensor and get k - num_anchor = _shape_as_tensor(scores)[-2].to(device) - nms_pre = torch.where(nms_pre_tensor < num_anchor, - nms_pre_tensor, num_anchor) - - max_scores, _ = (scores * centerness[..., None]).max(-1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - centerness = centerness[batch_inds, topk_inds] - else: - anchors = anchors.expand_as(bbox_pred) - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_centerness.append(centerness) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - batch_mlvl_centerness = torch.cat(mlvl_centerness, dim=1) - - # Set max number of box to be feed into nms in deployment - deploy_nms_pre = cfg.get('deploy_nms_pre', -1) - if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export(): - batch_mlvl_scores, _ = ( - batch_mlvl_scores * - batch_mlvl_centerness.unsqueeze(2).expand_as(batch_mlvl_scores) - ).max(-1) - _, topk_inds = batch_mlvl_scores.topk(deploy_nms_pre) - batch_inds = torch.arange(batch_size).view(-1, - 1).expand_as(topk_inds) - batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds, :] - batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds, :] - batch_mlvl_centerness = batch_mlvl_centerness[batch_inds, - topk_inds] - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - - if with_nms: - det_results = [] - for (mlvl_bboxes, mlvl_scores, - mlvl_centerness) in zip(batch_mlvl_bboxes, batch_mlvl_scores, - batch_mlvl_centerness): - det_bbox, det_label = multiclass_nms( - mlvl_bboxes, - mlvl_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=mlvl_centerness) - det_results.append(tuple([det_bbox, det_label])) - else: - det_results = [ - tuple(mlvl_bs) - for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores, - batch_mlvl_centerness) - ] - return det_results - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): 
- """Get targets for ATSS head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). 
- bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4) - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if hasattr(self, 'bbox_coder'): - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - # used in VFNetHead - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/io.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/io.py deleted file mode 100644 index 9879154227f640c262853b92c219461c6f67ee8e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/io.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -from collections import OrderedDict - -import cv2 -from cv2 import (CAP_PROP_FOURCC, CAP_PROP_FPS, CAP_PROP_FRAME_COUNT, - CAP_PROP_FRAME_HEIGHT, CAP_PROP_FRAME_WIDTH, - CAP_PROP_POS_FRAMES, VideoWriter_fourcc) - -from annotator.uniformer.mmcv.utils import (check_file_exist, mkdir_or_exist, scandir, - track_progress) - - -class Cache: - - def __init__(self, capacity): - self._cache = OrderedDict() - self._capacity = int(capacity) - if capacity <= 0: - raise ValueError('capacity must be a positive integer') - - @property - def capacity(self): - return self._capacity - - @property - def size(self): - return len(self._cache) - - def put(self, key, val): - if key in self._cache: - return - if len(self._cache) >= self.capacity: - self._cache.popitem(last=False) - self._cache[key] = val - - def get(self, key, default=None): - val = self._cache[key] if key in self._cache else default - return val - - -class VideoReader: - """Video class with similar usage to a list object. - - This video warpper class provides convenient apis to access frames. - There exists an issue of OpenCV's VideoCapture class that jumping to a - certain frame may be inaccurate. It is fixed in this class by checking - the position after jumping each time. - Cache is used when decoding videos. So if the same frame is visited for - the second time, there is no need to decode again if it is stored in the - cache. - - :Example: - - >>> import annotator.uniformer.mmcv as mmcv - >>> v = mmcv.VideoReader('sample.mp4') - >>> len(v) # get the total frame number with `len()` - 120 - >>> for img in v: # v is iterable - >>> mmcv.imshow(img) - >>> v[5] # get the 6th frame - """ - - def __init__(self, filename, cache_capacity=10): - # Check whether the video path is a url - if not filename.startswith(('https://', 'http://')): - check_file_exist(filename, 'Video file not found: ' + filename) - self._vcap = cv2.VideoCapture(filename) - assert cache_capacity > 0 - self._cache = Cache(cache_capacity) - self._position = 0 - # get basic info - self._width = int(self._vcap.get(CAP_PROP_FRAME_WIDTH)) - self._height = int(self._vcap.get(CAP_PROP_FRAME_HEIGHT)) - self._fps = self._vcap.get(CAP_PROP_FPS) - self._frame_cnt = int(self._vcap.get(CAP_PROP_FRAME_COUNT)) - self._fourcc = self._vcap.get(CAP_PROP_FOURCC) - - @property - def vcap(self): - """:obj:`cv2.VideoCapture`: The raw VideoCapture object.""" - return self._vcap - - @property - def opened(self): - """bool: Indicate whether the video is opened.""" - return self._vcap.isOpened() - - @property - def width(self): - """int: Width of video frames.""" - return self._width - - @property - def height(self): - """int: Height of video frames.""" - return self._height - - @property - def resolution(self): - """tuple: Video resolution (width, height).""" - return (self._width, self._height) - - @property - def fps(self): - """float: FPS of the video.""" - return self._fps - - @property - def frame_cnt(self): - """int: Total frames of the video.""" - return self._frame_cnt - - @property - def fourcc(self): - """str: "Four character code" of the video.""" - return self._fourcc - - @property - def position(self): - """int: Current cursor position, indicating frame decoded.""" - return self._position - - def _get_real_position(self): - return int(round(self._vcap.get(CAP_PROP_POS_FRAMES))) - - def _set_real_position(self, frame_id): - self._vcap.set(CAP_PROP_POS_FRAMES, frame_id) - pos = self._get_real_position() - for _ in range(frame_id - pos): - self._vcap.read() - 
self._position = frame_id - - def read(self): - """Read the next frame. - - If the next frame have been decoded before and in the cache, then - return it directly, otherwise decode, cache and return it. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - # pos = self._position - if self._cache: - img = self._cache.get(self._position) - if img is not None: - ret = True - else: - if self._position != self._get_real_position(): - self._set_real_position(self._position) - ret, img = self._vcap.read() - if ret: - self._cache.put(self._position, img) - else: - ret, img = self._vcap.read() - if ret: - self._position += 1 - return img - - def get_frame(self, frame_id): - """Get frame by index. - - Args: - frame_id (int): Index of the expected frame, 0-based. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - if frame_id < 0 or frame_id >= self._frame_cnt: - raise IndexError( - f'"frame_id" must be between 0 and {self._frame_cnt - 1}') - if frame_id == self._position: - return self.read() - if self._cache: - img = self._cache.get(frame_id) - if img is not None: - self._position = frame_id + 1 - return img - self._set_real_position(frame_id) - ret, img = self._vcap.read() - if ret: - if self._cache: - self._cache.put(self._position, img) - self._position += 1 - return img - - def current_frame(self): - """Get the current frame (frame that is just visited). - - Returns: - ndarray or None: If the video is fresh, return None, otherwise - return the frame. - """ - if self._position == 0: - return None - return self._cache.get(self._position - 1) - - def cvt2frames(self, - frame_dir, - file_start=0, - filename_tmpl='{:06d}.jpg', - start=0, - max_num=0, - show_progress=True): - """Convert a video to frame images. - - Args: - frame_dir (str): Output directory to store all the frame images. - file_start (int): Filenames will start from the specified number. - filename_tmpl (str): Filename template with the index as the - placeholder. - start (int): The starting frame index. - max_num (int): Maximum number of frames to be written. - show_progress (bool): Whether to show a progress bar. 
- """ - mkdir_or_exist(frame_dir) - if max_num == 0: - task_num = self.frame_cnt - start - else: - task_num = min(self.frame_cnt - start, max_num) - if task_num <= 0: - raise ValueError('start must be less than total frame number') - if start > 0: - self._set_real_position(start) - - def write_frame(file_idx): - img = self.read() - if img is None: - return - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - cv2.imwrite(filename, img) - - if show_progress: - track_progress(write_frame, range(file_start, - file_start + task_num)) - else: - for i in range(task_num): - write_frame(file_start + i) - - def __len__(self): - return self.frame_cnt - - def __getitem__(self, index): - if isinstance(index, slice): - return [ - self.get_frame(i) - for i in range(*index.indices(self.frame_cnt)) - ] - # support negative indexing - if index < 0: - index += self.frame_cnt - if index < 0: - raise IndexError('index out of range') - return self.get_frame(index) - - def __iter__(self): - self._set_real_position(0) - return self - - def __next__(self): - img = self.read() - if img is not None: - return img - else: - raise StopIteration - - next = __next__ - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._vcap.release() - - -def frames2video(frame_dir, - video_file, - fps=30, - fourcc='XVID', - filename_tmpl='{:06d}.jpg', - start=0, - end=0, - show_progress=True): - """Read the frame images from a directory and join them as a video. - - Args: - frame_dir (str): The directory containing video frames. - video_file (str): Output filename. - fps (float): FPS of the output video. - fourcc (str): Fourcc of the output video, this should be compatible - with the output file type. - filename_tmpl (str): Filename template with the index as the variable. - start (int): Starting frame index. - end (int): Ending frame index. - show_progress (bool): Whether to show a progress bar. 
- """ - if end == 0: - ext = filename_tmpl.split('.')[-1] - end = len([name for name in scandir(frame_dir, ext)]) - first_file = osp.join(frame_dir, filename_tmpl.format(start)) - check_file_exist(first_file, 'The start frame not found: ' + first_file) - img = cv2.imread(first_file) - height, width = img.shape[:2] - resolution = (width, height) - vwriter = cv2.VideoWriter(video_file, VideoWriter_fourcc(*fourcc), fps, - resolution) - - def write_frame(file_idx): - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - img = cv2.imread(filename) - vwriter.write(img) - - if show_progress: - track_progress(write_frame, range(start, end)) - else: - for i in range(start, end): - write_frame(i) - vwriter.release() diff --git a/spaces/Rominn/vits-uma-genshin-honkai/attentions.py b/spaces/Rominn/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Rominn/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) 
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Rongjiehuang/ProDiff/tasks/tts/pe.py b/spaces/Rongjiehuang/ProDiff/tasks/tts/pe.py deleted file mode 100644 index 3880c80d0820c36e044c00bd38a07fd3cce73323..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/tasks/tts/pe.py +++ /dev/null @@ -1,155 +0,0 @@ -import matplotlib -matplotlib.use('Agg') - -import torch -import numpy as np -import os - -from tasks.base_task import BaseDataset -from tasks.tts.fs2 import FastSpeech2Task -from modules.fastspeech.pe import PitchExtractor -import utils -from utils.indexed_datasets import IndexedDataset -from utils.hparams import hparams -from utils.plot import f0_to_figure -from utils.pitch_utils import norm_interp_f0, denorm_f0 - - -class PeDataset(BaseDataset): - def __init__(self, prefix, shuffle=False): - super().__init__(shuffle) - self.data_dir = hparams['binary_data_dir'] - self.prefix = prefix - self.hparams = hparams - self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy') - self.indexed_ds = None - - # pitch stats - f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy' - 
if os.path.exists(f0_stats_fn): - hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn) - hparams['f0_mean'] = float(hparams['f0_mean']) - hparams['f0_std'] = float(hparams['f0_std']) - else: - hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None - - if prefix == 'test': - if hparams['num_test_samples'] > 0: - self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids'] - self.sizes = [self.sizes[i] for i in self.avail_idxs] - - def _get_item(self, index): - if hasattr(self, 'avail_idxs') and self.avail_idxs is not None: - index = self.avail_idxs[index] - if self.indexed_ds is None: - self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}') - return self.indexed_ds[index] - - def __getitem__(self, index): - hparams = self.hparams - item = self._get_item(index) - max_frames = hparams['max_frames'] - spec = torch.Tensor(item['mel'])[:max_frames] - # mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None - f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams) - pitch = torch.LongTensor(item.get("pitch"))[:max_frames] - # print(item.keys(), item['mel'].shape, spec.shape) - sample = { - "id": index, - "item_name": item['item_name'], - "text": item['txt'], - "mel": spec, - "pitch": pitch, - "f0": f0, - "uv": uv, - # "mel2ph": mel2ph, - # "mel_nonpadding": spec.abs().sum(-1) > 0, - } - return sample - - def collater(self, samples): - if len(samples) == 0: - return {} - id = torch.LongTensor([s['id'] for s in samples]) - item_names = [s['item_name'] for s in samples] - text = [s['text'] for s in samples] - f0 = utils.collate_1d([s['f0'] for s in samples], 0.0) - pitch = utils.collate_1d([s['pitch'] for s in samples]) - uv = utils.collate_1d([s['uv'] for s in samples]) - mels = utils.collate_2d([s['mel'] for s in samples], 0.0) - mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples]) - # mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \ - # if samples[0]['mel2ph'] is not None else None - # mel_nonpaddings = utils.collate_1d([s['mel_nonpadding'].float() for s in samples], 0.0) - - batch = { - 'id': id, - 'item_name': item_names, - 'nsamples': len(samples), - 'text': text, - 'mels': mels, - 'mel_lengths': mel_lengths, - 'pitch': pitch, - # 'mel2ph': mel2ph, - # 'mel_nonpaddings': mel_nonpaddings, - 'f0': f0, - 'uv': uv, - } - return batch - - -class PitchExtractionTask(FastSpeech2Task): - def __init__(self): - super().__init__() - self.dataset_cls = PeDataset - - def build_tts_model(self): - self.model = PitchExtractor(conv_layers=hparams['pitch_extractor_conv_layers']) - - # def build_scheduler(self, optimizer): - # return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5) - def _training_step(self, sample, batch_idx, _): - loss_output = self.run_model(self.model, sample) - total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad]) - loss_output['batch_size'] = sample['mels'].size()[0] - return total_loss, loss_output - - def validation_step(self, sample, batch_idx): - outputs = {} - outputs['losses'] = {} - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=True) - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - self.plot_pitch(batch_idx, model_out, sample) - return outputs - - def 
run_model(self, model, sample, return_output=False, infer=False): - f0 = sample['f0'] - uv = sample['uv'] - output = model(sample['mels']) - losses = {} - self.add_pitch_loss(output, sample, losses) - if not return_output: - return losses - else: - return losses, output - - def plot_pitch(self, batch_idx, model_out, sample): - gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams) - self.logger.experiment.add_figure( - f'f0_{batch_idx}', - f0_to_figure(gt_f0[0], None, model_out['f0_denorm_pred'][0]), - self.global_step) - - def add_pitch_loss(self, output, sample, losses): - # mel2ph = sample['mel2ph'] # [B, T_s] - mel = sample['mels'] - f0 = sample['f0'] - uv = sample['uv'] - # nonpadding = (mel2ph != 0).float() if hparams['pitch_type'] == 'frame' \ - # else (sample['txt_tokens'] != 0).float() - nonpadding = (mel.abs().sum(-1) > 0).float() # sample['mel_nonpaddings'] - # print(nonpadding[0][-8:], nonpadding.shape) - self.add_f0_loss(output['pitch_pred'], f0, uv, losses, nonpadding=nonpadding) \ No newline at end of file diff --git a/spaces/SUPERSHANKY/Finetuned_Diffusion_Max/style.css b/spaces/SUPERSHANKY/Finetuned_Diffusion_Max/style.css deleted file mode 100644 index 9bfa78cc983f84693cf7cbab1e3bfd0e0d36c944..0000000000000000000000000000000000000000 --- a/spaces/SUPERSHANKY/Finetuned_Diffusion_Max/style.css +++ /dev/null @@ -1,24 +0,0 @@ -.finetuned-diffusion-div div{ - display:inline-flex; - align-items:center; - gap:.8rem; - font-size:1.75rem -} -.finetuned-diffusion-div div h1{ - font-weight:900; - margin-bottom:7px -} -.finetuned-diffusion-div p{ - margin-bottom:10px; - font-size:94% -} -a{ - text-decoration:underline -} -.tabs{ - margin-top:0; - margin-bottom:0 -} -#gallery{ - min-height:20rem -} diff --git a/spaces/Sakil/A_cover_letter_generator_for_jobs/app.py b/spaces/Sakil/A_cover_letter_generator_for_jobs/app.py deleted file mode 100644 index 633cc282bd54314712d90b7995659916e9e119e2..0000000000000000000000000000000000000000 --- a/spaces/Sakil/A_cover_letter_generator_for_jobs/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import gradio as gr -import torch -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline - -model_name = "nouamanetazi/cover-letter-t5-base" -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = AutoModelForSeq2SeqLM.from_pretrained(model_name) - - -def generate_cover_letter( - name, job, company, background, experiences, max_length=300, temperature=1.0, top_p=0.9, max_time=10 -): - model_args = { - "max_length": max_length, - "temperature": temperature, - "top_p": top_p, - # "top_k": 120, - "early_stopping": True, - "max_time": max_time, - "do_sample": True, # do_sample=False to force deterministic output - "num_return_sequences": 1, # number of samples to return - "min_length": 100, - "num_beams": 4, - # "num_beam_groups": 1, - # "diversity_penalty": 0, - # "repetition_penalty": 5.0, - # "length_penalty": 0, - # "remove_invalid_values": True, - "no_repeat_ngram_size": 3, - } - # Load the tokenizer and the distilgpt2 model - # Set up the transformers pipeline - text_generator = pipeline( - "text2text-generation", model=model, tokenizer=tokenizer, device=0 if torch.cuda.is_available() else -1 - ) - # Generate the text - prompt = f"coverletter name: {name} job: {job} at {company} background: {background} experiences: {experiences}" - generated_text = text_generator(prompt, **model_args)[0]["generated_text"] - return generated_text - - -title = "A Cover Letter Generator for Jobs" -description = "" -article = '
This is a Space app for generating cover letters
' -examples = None -interface = gr.Interface( - fn=generate_cover_letter, - inputs=[ - gr.inputs.Textbox( - label="Your name", - default="Sakil Ansari", - ), - gr.inputs.Textbox( - label="The job you want to apply for", - default="Data Scientist", - ), - gr.inputs.Textbox( - label="The company you want to apply for", - default="Google", - ), - gr.inputs.Textbox( - lines=2, - label="Your education/background", - default="Master of Technology in Machine learning", - ), - gr.inputs.Textbox( - lines=3, - label="Your skills/previous experiences", - default="I am the Author of Book and MTech in Machine Learning and achievement-driven professional with an experience of 3+ years in Data Science/Machine Learning/NLP/ Deep Learning/Data analytics. I am highly skilled in libraries like Sklearn, Numpy, Pandas, Matplotlib, Seaborn, Tensorflow, Faster-RCNN, Keras, Pytorch, FastAI, PowerBI/Tableau for Data Visualization, SQL/Oracle/NoSQL for databases and experience in NLP use cases related to named entity recognition, text summarization, text similarity, text generation.", - ), - gr.inputs.Slider(20, 2048, default=400, label="Max Length"), - gr.inputs.Slider(0, 3, default=1.2, label="Temperature"), - gr.inputs.Slider(0, 1, default=0.9, label="Top P"), - gr.inputs.Slider(1, 200, default=20, label="Max time"), - ], - outputs=[gr.outputs.Textbox(type="str", label="Cover Letter")], - title=title, - description=description, - examples=examples, - article=article, - layout="horizontal", -) -interface.launch(inline=False, debug=False) diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/transforms/__init__.py b/spaces/SankarSrin/image-matting-app/ppmatting/transforms/__init__.py deleted file mode 100644 index 7986cdd642998fb0638a81c9ea22615faf8bad0b..0000000000000000000000000000000000000000 --- a/spaces/SankarSrin/image-matting-app/ppmatting/transforms/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .transforms import * diff --git a/spaces/SeyedAli/Image-Similarity/src/similarity/model_implements/mobilenet_v3.py b/spaces/SeyedAli/Image-Similarity/src/similarity/model_implements/mobilenet_v3.py deleted file mode 100644 index 53b6770039bb4c077691aa578352d35427869119..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Image-Similarity/src/similarity/model_implements/mobilenet_v3.py +++ /dev/null @@ -1,14 +0,0 @@ -import tensorflow_hub as hub -import numpy as np - -class ModelnetV3(): - def __init__(self): - module_handle = "https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/feature_vector/5" - self.module = hub.load(module_handle) - - def extract_feature(self, imgs): - print('getting with ModelnetV3...') - features = [] - for img in imgs: - features.append(np.squeeze(self.module(img))) - return features \ No newline at end of file diff --git a/spaces/SidKarthik/multi_doc_retrieval_agent/htmlTemplates.py b/spaces/SidKarthik/multi_doc_retrieval_agent/htmlTemplates.py deleted file mode 100644 index 9ce8cc07233cf7bfe06e864123c865d3c7e4145c..0000000000000000000000000000000000000000 --- a/spaces/SidKarthik/multi_doc_retrieval_agent/htmlTemplates.py +++ /dev/null @@ -1,45 +0,0 @@ -css = ''' - -# """, -# unsafe_allow_html=True, -# ) - - -# _max_width_() - -# c30, c31, c32 = st.columns([2.5, 1, 3]) - -# with c30: -# # st.image("logo.png", width=400) -st.title("👨‍⚕️ PhenoTagger_v1.2 Demo") - -with st.expander("ℹ️ - About this app", expanded=True): - - st.write( - """ -- This app is an easy-to-use interface built in Streamlit for [PhenoTagger](https://github.com/ncbi-nlp/PhenoTagger) library! 
-- PhenoTagger is a hybrid method that combines dictionary and deep learning-based methods to recognize Human Phenotype Ontology (HPO) concepts in unstructured biomedical text. Please refer to [our paper](https://doi.org/10.1093/bioinformatics/btab019) for more details. -- Contact: [NLM/NCBI BioNLP Research Group](https://www.ncbi.nlm.nih.gov/research/bionlp/) - """ - ) - - st.markdown("") - -st.markdown("") -st.markdown("## 📌 Paste document ") -with st.form(key="my_form"): - - - ce, c1, ce, c2, c3 = st.columns([0.07, 1, 0.07, 4, 0.07]) - with c1: - ModelType = st.radio( - "Choose your Ontology", - ["HPO(Default)", "UBERON"], - #help="Bioformer is more precise, CNN is more efficient", - ) - - if ModelType == "HPO(Default)": - # kw_model = KeyBERT(model=roberta) - - #@st.cache(allow_output_mutation=True) - def load_model(): - ontfiles={'dic_file':'./dict_hpo/noabb_lemma.dic', - 'word_id_file':'./dict_hpo/word_id_map.json', - 'id_word_file':'./dict_hpo/id_word_map.json'} - - - vocabfiles={'labelfile':'./dict_hpo/lable.vocab', - 'checkpoint_path':'./models_v1.2/bioformer-cased-v1.0/', - 'lowercase':False} - modelfile='./models_v1.2/bioformer-HPO.h5' - - - biotag_dic=dic_ont(ontfiles) - - nn_model=bioTag_BERT(vocabfiles) - nn_model.load_model(modelfile) - return nn_model,biotag_dic - - nn_model,biotag_dic = load_model() - - else: - #@st.cache(allow_output_mutation=True) - def load_model(): - ontfiles={'dic_file':'./dict_uberon/noabb_lemma.dic', - 'word_id_file':'./dict_uberon/word_id_map.json', - 'id_word_file':'./dict_uberon/id_word_map.json'} - - vocabfiles={'labelfile':'./dict_uberon/lable.vocab', - 'checkpoint_path':'./models_v1.2/bioformer-cased-v1.0/', - 'lowercase':False} - - modelfile='./models_v1.2/bioformer-UBERON.h5' - - biotag_dic=dic_ont(ontfiles) - - nn_model=bioTag_BERT(vocabfiles) - nn_model.load_model(modelfile) - - return nn_model,biotag_dic - - nn_model,biotag_dic = load_model() - - para_overlap = st.checkbox( - "Overlap concept", - value=False, - help="Tick this box to identify overlapping concepts", - ) - para_abbr = st.checkbox( - "Abbreviaitons", - value=True, - help="Tick this box to identify abbreviations", - ) - - para_threshold = st.slider( - "Threshold", - min_value=0.5, - max_value=1.0, - value=0.95, - step=0.05, - help="Retrun the preditions which socre over the threshold.", - ) - - - - - with c2: - - - doc = st.text_area( - "Paste your text below", - value = 'The clinical features of Angelman syndrome (AS) comprise severe mental retardation, postnatal microcephaly, macrostomia and prognathia, absence of speech, ataxia, and a happy disposition. We report on seven patients who lack most of these features, but presented with obesity, muscular hypotonia and mild mental retardation. Based on the latter findings, the patients were initially suspected of having Prader-Willi syndrome. DNA methylation analysis of SNRPN and D15S63, however, revealed an AS pattern, ie the maternal band was faint or absent. Cytogenetic studies and microsatellite analysis demonstrated apparently normal chromosomes 15 of biparental inheritance. We conclude that these patients have an imprinting defect and a previously unrecognised form of AS. The mild phenotype may be explained by an incomplete imprinting defect or by cellular mosaicism.', - height=400, - ) - - - - - # MAX_WORDS = 500 - # import re - # res = len(re.findall(r"\w+", doc)) - # if res > MAX_WORDS: - # st.warning( - # "⚠️ Your text contains " - # + str(res) - # + " words." - # + " Only the first 500 words will be reviewed. 
Stay tuned as increased allowance is coming! 😊" - # ) - - # doc = doc[:MAX_WORDS] - - submit_button = st.form_submit_button(label="✨ Submit!") - - -if not submit_button: - st.stop() - -#st.write(para_overlap,para_abbr,para_threshold) -para_set={ - #model_type':para_model, # cnn or bioformer - 'onlyLongest': not para_overlap, # False: return overlap concepts, True only longgest - 'abbrRecog':para_abbr,# False: don't identify abbr, True: identify abbr - 'ML_Threshold':para_threshold,# the Threshold of deep learning model - } -st.markdown("") -st.markdown("## 💡 Tagging results:") -with st.spinner('Wait for tagging...'): - tag_result=bioTag(doc,biotag_dic,nn_model,onlyLongest=para_set['onlyLongest'], abbrRecog=para_set['abbrRecog'],Threshold=para_set['ML_Threshold']) - -st.markdown('Move the mouse🖱️ over the entity to display the HPO id.', unsafe_allow_html=True) -# print('dic...........:',biotag_dic.keys()) -# st.write('parameters:', para_overlap,para_abbr,para_threshold) - -html_results='' -text_results=doc+'\n' -entity_end=0 -hpoid_count={} -if len(tag_result)>=0: - for ele in tag_result: - entity_start=int(ele[0]) - html_results+=doc[entity_end:entity_start] - entity_end=int(ele[1]) - entity_id=ele[2] - entity_score=ele[3] - text_results+=ele[0]+'\t'+ele[1]+'\t'+doc[entity_start:entity_end]+'\t'+ele[2]+'\t'+format(float(ele[3]),'.2f')+'\n' - if entity_id not in hpoid_count.keys(): - hpoid_count[entity_id]=1 - else: - hpoid_count[entity_id]+=1 - - html_results+=''+doc[entity_start:entity_end]+'' - html_results+=doc[entity_end:] - -else: - html_results=doc - -st.markdown('
'+html_results+'
', unsafe_allow_html=True) - - -#table -data_entity=[] -for ele in hpoid_count.keys(): - segs=ele.split(';') - term_name='' - for seg in segs: - term_name+=biotag_dic.id_word[seg][0]+';' - temp=[ele,term_name,hpoid_count[ele]] #hpoid, term name, count - data_entity.append(temp) - - -st.markdown("") -st.markdown("") -# st.markdown("## Table output:") - -# cs, c1, c2, c3, cLast = st.columns([2, 1.5, 1.5, 1.5, 2]) - -# with c1: -# CSVButton2 = download_button(keywords, "Data.csv", "📥 Download (.csv)") -# with c2: -# CSVButton2 = download_button(keywords, "Data.txt", "📥 Download (.txt)") -# with c3: -# CSVButton2 = download_button(keywords, "Data.json", "📥 Download (.json)") - -# st.header("") - -df = ( - DataFrame(data_entity, columns=["Ontology_id", "Term name","Frequency"]) - .sort_values(by="Frequency", ascending=False) - .reset_index(drop=True) -) - -df.index += 1 - -c1, c2, c3 = st.columns([1, 4, 1]) - -# format_dictionary = { -# "Relevancy": "{:.1%}", -# } - -# df = df.format(format_dictionary) - -with c2: - st.table(df) - -c1, c2, c3 = st.columns([1, 1, 1]) -with c2: - st.download_button('Download annotations', text_results) - diff --git a/spaces/llamaindex/llama_index_vector_demo/app.py b/spaces/llamaindex/llama_index_vector_demo/app.py deleted file mode 100644 index e594bba9e6c9d48692f0c531460c6fe8143c928b..0000000000000000000000000000000000000000 --- a/spaces/llamaindex/llama_index_vector_demo/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import os -import streamlit as st -from llama_index import ( - GPTVectorStoreIndex, - SimpleDirectoryReader, - ServiceContext, - StorageContext, - LLMPredictor, - load_index_from_storage, -) -from langchain.chat_models import ChatOpenAI - -index_name = "./saved_index" -documents_folder = "./documents" - - -@st.cache_resource -def initialize_index(index_name, documents_folder): - llm_predictor = LLMPredictor( - llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0) - ) - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor) - if os.path.exists(index_name): - index = load_index_from_storage( - StorageContext.from_defaults(persist_dir=index_name), - service_context=service_context, - ) - else: - documents = SimpleDirectoryReader(documents_folder).load_data() - index = GPTVectorStoreIndex.from_documents( - documents, service_context=service_context - ) - index.storage_context.persist(persist_dir=index_name) - - return index - - -@st.cache_data(max_entries=200, persist=True) -def query_index(_index, query_text): - if _index is None: - return "Please initialize the index!" - response = _index.as_query_engine().query(query_text) - return str(response) - - -st.title("🦙 Llama Index Demo 🦙") -st.header("Welcome to the Llama Index Streamlit Demo") -st.write( - "Enter a query about Paul Graham's essays. You can check out the original essay [here](https://raw.githubusercontent.com/jerryjliu/llama_index/main/examples/paul_graham_essay/data/paul_graham_essay.txt). Your query will be answered using the essay as context, using embeddings from text-ada-002 and LLM completions from gpt-3.5-turbo. 
You can read more about Llama Index and how this works in [our docs!](https://gpt-index.readthedocs.io/en/latest/index.html)" -) - -index = None -api_key = st.text_input("Enter your OpenAI API key here:", type="password") -if api_key: - os.environ["OPENAI_API_KEY"] = api_key - index = initialize_index(index_name, documents_folder) - - -if index is None: - st.warning("Please enter your api key first.") - -text = st.text_input("Query text:", value="What did the author do growing up?") - -if st.button("Run Query") and text is not None: - response = query_index(index, text) - st.markdown(response) - - llm_col, embed_col = st.columns(2) - with llm_col: - st.markdown( - f"LLM Tokens Used: {index.service_context.llm_predictor._last_token_usage}" - ) - - with embed_col: - st.markdown( - f"Embedding Tokens Used: {index.service_context.embed_model._last_token_usage}" - ) diff --git a/spaces/ltg/chat-nort5/app.py b/spaces/ltg/chat-nort5/app.py deleted file mode 100644 index afe1d42769c0df642069a585a1590e71249819b0..0000000000000000000000000000000000000000 --- a/spaces/ltg/chat-nort5/app.py +++ /dev/null @@ -1,592 +0,0 @@ -import torch -import transformers -from transformers import AutoTokenizer, TextIteratorStreamer -from transformers.generation import LogitsProcessor -from modeling_nort5_acd import NorT5ForConditionalGeneration -from threading import Thread -import gradio as gr -import json -import os -import re -import shutil -import ftfy - -import huggingface_hub -from huggingface_hub import Repository -from datetime import datetime - -from retrieval import Retrival - - -# SYNCHRONIZE WITH DATASETS -LOGS_REPO = "https://huggingface.co/datasets/ltg/chat-nort5-logs" -HF_TOKEN = os.environ.get("HF_TOKEN") -print("Is HF token none?", HF_TOKEN is None) - -dataset = Repository( - local_dir="data", clone_from=LOGS_REPO, use_auth_token=HF_TOKEN -) - -def system_info(temperature, top_p, top_k, repetition_penalty, acd, initial_prompt): - return { - "time": str(datetime.now()), - "temperature": temperature, - "top_p": top_p, - "top_k": top_k, - "repetition_penalty": repetition_penalty, - "acd": acd, - "initial_prompt": initial_prompt - } - -print(f"Starting to load the model to memory") - -tokenizer = AutoTokenizer.from_pretrained("chat_nort5_large") -cls_index = tokenizer.convert_tokens_to_ids("[CLS]") -sep_index = tokenizer.convert_tokens_to_ids("[SEP]") -user_index = tokenizer.convert_tokens_to_ids("[USER]") -assistent_index = tokenizer.convert_tokens_to_ids("[ASSISTENT]") - -model = NorT5ForConditionalGeneration.from_pretrained("chat_nort5_large", ignore_mismatched_sizes=True) - -device = "cuda" if torch.cuda.is_available() else "cpu" -print(f"SYSTEM: Running on {device}", flush=True) - -model = model.to(device) -model.eval() - -print(f"Sucessfully loaded the model to the memory") - -retrieval = Retrival() - - - -def strip_context(text): - result = re.match(r".*(.*).*", text) - if result is not None: - return result.group(1) - return text - - -def fix_capitalization(text): - text = text.strip()[len("NorT5:"):].strip() - if len(text) == 0: - return text - - paragraphs = text.split("
") - paragraphs = [p.strip() for p in paragraphs] - paragraphs = [p[0].upper() + p[1:] if len(p) > 0 else p for p in paragraphs] - text = "
".join(paragraphs) - text = text.replace("

", "

") - return text - - -def remove_last_message(history): - if len(history) > 1: - while history[-1][0] is None or history[-1][1] is None: - history = history[:-1] - history[-1][1] = "" - return history - - -def undo_history(history): - if len(history) > 1: - while history[-1][0] is None or history[-1][1] is None: - history = history[:-1] - history = history[:-1] - return history - - -def initialize_chat(history): - return [[None, "Hei, hva kan jeg gjøre for deg? 😊"]] - - -def commit_to_logs(path, state): - global dataset - try: - with open(path, "a") as f: - line = json.dumps(state, ensure_ascii=False) - f.write(f"{line}\n") - dataset.push_to_hub() - - except: - shutil.rmtree("data") - dataset = Repository( - local_dir="data", clone_from=LOGS_REPO, use_auth_token=HF_TOKEN - ) - with open(path, "a") as f: - line = json.dumps(state, ensure_ascii=False) - f.write(f"{line}\n") - dataset.push_to_hub() - - -def user(message, history, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages): - message = ftfy.fix_text(message) - if log_messages: - meta_info = system_info(temperature, top_p, top_k, repetition_penalty, acd, initial_prompt) - previous_messages = [[h[0], strip_context(h[1])] for h in history if h[0] is not None and h[1] is not None] - commit_to_logs( - "data/prompts.jsonl", - {"last_message": message, "history": previous_messages, "system_info": meta_info} - ) - - - return "", history + [[message, ""]] - - -def send_suggestion(suggestion, downvote_reason, history, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages): - if log_messages: - meta_info = system_info(temperature, top_p, top_k, repetition_penalty, acd, initial_prompt) - previous_messages = [[h[0], strip_context(h[1])] for h in history if h[0] is not None and h[1] is not None] - commit_to_logs( - "data/downvotes_details.jsonl", - {"suggestion": suggestion.strip() if len(suggestion.strip()) > 0 else None, "history": previous_messages, "system_info": meta_info, "reasons": downvote_reason} - ) - - if len(suggestion.strip()) > 0: - history = history[:-1] - history[-1][1] = suggestion.strip() - - return "", history - - -def send_upvote(history, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages): - if log_messages: - meta_info = system_info(temperature, top_p, top_k, repetition_penalty, acd, initial_prompt) - previous_messages = [[h[0], strip_context(h[1])] for h in history if h[0] is not None and h[1] is not None] - commit_to_logs( - "data/upvotes.jsonl", - {"last_message": previous_messages[-1][1], "history": previous_messages[:-1] + [[previous_messages[-1][0], ""]], "system_info": meta_info} - ) - - return history + [["👍", None]] - - -def send_downvote(history, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages): - if log_messages: - meta_info = system_info(temperature, top_p, top_k, repetition_penalty, acd, initial_prompt) - previous_messages = [[h[0], strip_context(h[1])] for h in history if h[0] is not None and h[1] is not None] - commit_to_logs( - "data/downvotes.jsonl", - {"last_message": previous_messages[-1][1], "history": previous_messages[:-1] + [[previous_messages[-1][0], ""]], "system_info": meta_info} - ) - - return history + [["👎", None]] - - -def send_flag(history, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages): - if log_messages: - meta_info = system_info(temperature, top_p, top_k, repetition_penalty, acd, initial_prompt) - previous_messages = [[h[0], 
strip_context(h[1])] for h in history if h[0] is not None and h[1] is not None] - commit_to_logs( - "data/flags.jsonl", - {"last_message": previous_messages[-1][1], "history": previous_messages[:-1] + [[previous_messages[-1][0], ""]], "system_info": meta_info} - ) - - return history + [["🚩", None]] - - -def chat(curr_system_message, history, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt): - message = None - with torch.inference_mode(): - with torch.autocast(enabled=device != "cpu", device_type=device, dtype=torch.float16): - if retrieval.is_question(history[-1][0]): - title, url, answer = retrieval.get_answer(history[-1][0]) - if answer is not None: - left_context, answer, right_context = answer - message = "" - if len(left_context.strip()) > 0: - message += f"... *{left_context.strip()}*   " - message += f"**{answer.strip()}**" - if len(right_context.strip()) > 0: - message += f"   *{right_context.strip()}* ..." - - history[-1][1] = message - history += [[None, f"source: {title} (Wikipedia)"]] - yield history - return answer.strip() - - else: - message = "Jeg prøvde å finne svaret på Wikipedia, men fant det ikke. I stedet skal jeg prøve å svare ut fra hukommelsen, men jeg er ikke så flink til å huske ting, så det er ikke sikkert det blir helt korrekt:

" - - class RepetitionPenaltyLogitsProcessor(LogitsProcessor): - def __init__(self, penalty: float, model): - last_bias = model.classifier.nonlinearity[-1].bias.data - last_bias = torch.nn.functional.log_softmax(last_bias) - self.penalty = penalty * (last_bias - last_bias.max()) - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: - penalized_score = torch.gather(scores + self.penalty.unsqueeze(0).to(input_ids.device), 1, input_ids) - scores.scatter_(1, input_ids, penalized_score) - return scores - - if initial_prompt is None or len(initial_prompt) == 0: - messages = "[SEP] ".join([ - f"brukeren: {item[0]}[SEP] NorT5: {strip_context(item[1])}" if len(item[1]) > 0 else f"brukeren: {item[0]}" - for item in history - if item[0] is not None and item[1] is not None - ]) - else: - messages = f"{initial_prompt.strip()}[SEP] " + "[SEP] ".join([ - f"brukeren: {item[0].strip()}[SEP] NorT5: {strip_context(item[1])}" if len(item[1]) > 0 else f"brukeren: {item[0].strip()}" - for item in history - if item[0] is not None and item[1] is not None - ]) - - print(messages) - - # Tokenize the messages string - prompt = tokenizer(messages, add_special_tokens=False).input_ids - prompt = prompt[-(768-2):] - prompt = [cls_index] + prompt + [sep_index] - prompt = torch.tensor([prompt], device=device) - print(', '.join(str(p) for p in prompt)) - - streamer = TextIteratorStreamer(tokenizer, timeout=60.0, skip_special_tokens=True) - - def generate(model, **kwargs): - with torch.inference_mode(): - #with torch.autocast(enabled=device != "cpu", device_type=device, dtype=torch.float16): - return model.generate(**kwargs) - - generate_kwargs = dict( - inputs=prompt, - streamer=streamer, - max_new_tokens=1024-1, - decoder_input_ids=torch.tensor([tokenizer("[BOS] NorT5:", add_special_tokens=False).input_ids], device=device), - top_k=top_k, - top_p=top_p, - do_sample=True, - temperature=temperature, - num_beams=1, - use_cache=True, - use_acd=acd, - logits_processor=[RepetitionPenaltyLogitsProcessor(repetition_penalty, model), transformers.LogitNormalization()] - ) - t = Thread(target=generate, args=(model,), kwargs=generate_kwargs) - t.start() - - partial_text = "" - for new_text in streamer: - partial_text += new_text - history[-1][1] = ("" if message is None else message) + fix_capitalization(partial_text) - yield history - - -description = """ -
-

Chat-NorT5

-

- -

-

-
- En norsk chatbot basert på NorT5-large språkmodellen, trent helt fra bunnen av LTG gruppen ved UiO. Modellen ble trent på chat-nort5-dataset. -
- Test hva som kan gjøres med en fullstendig norsk modell som er 200 ganger mindre enn GPT-3! -
-
- Modellen er hovedsaklig ment for forskning og pedagogiske formål. Den er ikke like kraftig som de store "state-of-the-art" språkmodellene. Den gir ofte ukorrekte svar, den hallusinerer, og den kan generere støtende innhold. Grunnet mangel på norske datasett for såkalt instruksjons-tuning er det her brukt datasett som er maskinoversatt fra engelsk. -

-
-""" - -#

-#
-# A Norwegian chatbot based on the NorT5-large language model, trained entirely from scratch by the LTG group at UiO. -#
-# Test what is possible with a Norwegian-only model that is 200 times smaller than GPT-3! -#
-#
-# The model is mainly intended for research and educational purposes, it is not as powerful as state-of-the-art large language models. It often answers incorrectly, hallucinates and it might generate harmful content. -#

-#
-# """ - - -with gr.Blocks(theme='sudeepshouche/minimalist') as demo: - gr.HTML(description) - # gr.HTML('') - log_messages = gr.Checkbox(label="Jeg godtar at samtalene mine lagres for videre utviklingsformål", value=False) - chatbot = gr.Chatbot(value=[[None, "Hei, hva kan jeg gjøre for deg? 😊"]]) - - with gr.Row(): - with gr.Column(scale=5): - msg = gr.Textbox( - label="Your message", placeholder="Write a query to NorT5...", show_label=False - ) # .style(container=False) - with gr.Column(scale=1): - submit = gr.Button("Submit", variant="primary").style(full_width=True) - - with gr.Row(): - upvote = gr.Button("👍 upvote", interactive=False) - downvote = gr.Button("👎 downvote", interactive=False) - flag = gr.Button("🚩 flag", interactive=False, visible=False) - repeat = gr.Button("♻️ repeat", interactive=False) - undo = gr.Button("🔙 undo", interactive=False) - clear = gr.Button("✖️ clear", interactive=False) - - with gr.Row(visible=False) as suggestion: - with gr.Column(scale=5): - downvote_reason = gr.CheckboxGroup( - ["not factual", "improper language fluency", "irrelevant", "harmful or inappropriate", "not helpful"], label="What are the reasons for the downvote?", show_label=True, interactive=True - ) - suggested_msg = gr.Textbox( - label="Please suggest a proper chatbot response (optionally)", placeholder="your suggestion", show_label=True - ) - suggested_submit = gr.Button("Submit downvote details", variant="primary").style(full_width=True) - - with gr.Accordion("Advanced Options:", open=False): - with gr.Row(): - with gr.Column(): - temperature = gr.Slider( - label="Temperature", - value=0.3, - minimum=0.05, - maximum=2.0, - step=0.025, - interactive=True, - info="Higher values produce more diverse outputs", - ) - top_p = gr.Slider( - label="Top-p (nucleus sampling)", - value=0.95, - minimum=0.1, - maximum=1, - step=0.01, - interactive=True, - info=( - "Sample from the smallest possible set of tokens whose cumulative probability exceeds top_p. Set to 1 to disable and sample from all tokens." 
- ), - ) - top_k = gr.Slider( - label="Top-k", - value=64, - minimum=0, - maximum=256, - step=1, - interactive=True, - info="Sample from a shortlist of top-k tokens — 0 to disable and sample from all tokens.", - ) - with gr.Column(): - repetition_penalty = gr.Slider( - label="Repetition penalty", - value=0.8, - minimum=0.0, - maximum=2.0, - step=0.01, - interactive=True, - info="Penalize repeated tokens – set to 0.0 to not penalize", - ) - acd = gr.Checkbox( - label="Autocontrastive decoding", - value=True, - info="Turn on autocontrastive decoding, as in https://arxiv.org/abs/2305.01628" - ) - initial_prompt = gr.Textbox( - "System: Du er en hjelpsom språkmodell som heter ChatNorT5.", label="The initial prompt that sets the chatbot's behavior", placeholder="" - ) - - def update_stete_after_publish_click(log_messages, history): - should_respond = log_messages and len(history) > 1 and history[-1][0] is not None and history[-1][1] is not None and len(history[-1][0].strip()) > 0 and len(history[-1][1].strip()) > 0 - return { - upvote: gr.update(interactive=should_respond), - flag: gr.update(interactive=should_respond), - downvote: gr.update(interactive=should_respond) - } - - def update_state_after_user(log_messages): - return { - msg: gr.update(interactive=False), - submit: gr.update(interactive=False), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=False), - repeat: gr.update(interactive=False), - undo: gr.update(interactive=False), - suggestion: gr.update(visible=False) - } - - def update_state_after_chat(log_messages): - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=log_messages), - flag: gr.update(interactive=log_messages), - downvote: gr.update(interactive=log_messages), - clear: gr.update(interactive=True), - repeat: gr.update(interactive=True), - undo: gr.update(interactive=True), - suggestion: gr.update(visible=False) - } - - def update_state_after_reset(log_messages): - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=False), - repeat: gr.update(interactive=False), - undo: gr.update(interactive=False), - suggestion: gr.update(visible=False) - } - - def update_state_after_upvote(log_messages): - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=True), - repeat: gr.update(interactive=True), - undo: gr.update(interactive=True), - suggestion: gr.update(visible=False) - } - - def update_state_after_flag(log_messages): - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=True), - repeat: gr.update(interactive=True), - undo: gr.update(interactive=True), - suggestion: gr.update(visible=False) - } - - def update_state_after_downvote(log_messages): - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: 
gr.update(interactive=True), - repeat: gr.update(interactive=True), - undo: gr.update(interactive=True), - suggestion: gr.update(visible=True) - } - - def update_state_after_suggestion(log_messages): - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=True), - repeat: gr.update(interactive=True), - undo: gr.update(interactive=True), - suggestion: gr.update(visible=False) - } - - def update_state_after_undo(history, log_messages): - if len(history) > 1: - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=True), - repeat: gr.update(interactive=False), - undo: gr.update(interactive=True), - suggestion: gr.update(visible=False) - } - return { - msg: gr.update(interactive=True), - submit: gr.update(interactive=True), - upvote: gr.update(interactive=False), - flag: gr.update(interactive=False), - downvote: gr.update(interactive=False), - clear: gr.update(interactive=True), - repeat: gr.update(interactive=False), - undo: gr.update(interactive=False), - suggestion: gr.update(visible=False) - } - - - log_messages.change( - fn=update_stete_after_publish_click, inputs=[log_messages, chatbot], outputs=[upvote, flag, downvote], queue=False - ) - - submit_event = msg.submit( - fn=user, inputs=[msg, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=[msg, chatbot], queue=False - ).then( - fn=update_state_after_user, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ).then( - fn=chat, inputs=[initial_prompt, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt], outputs=[chatbot], queue=True - ).then( - fn=update_state_after_chat, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - submit_click_event = submit.click( - fn=user, inputs=[msg, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=[msg, chatbot], queue=False - ).then( - fn=update_state_after_user, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ).then( - fn=chat, inputs=[initial_prompt, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt], outputs=[chatbot], queue=True - ).then( - fn=update_state_after_chat, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - clear.click( - fn=initialize_chat, inputs=chatbot, outputs=chatbot, queue=False - ).then( - fn=update_state_after_reset, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - repeat.click( - fn=update_state_after_user, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ).then( - fn=remove_last_message, inputs=chatbot, outputs=chatbot, queue=False - ).then( - fn=chat, inputs=[initial_prompt, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt], outputs=[chatbot], queue=True - ).then( - fn=update_state_after_chat, inputs=[log_messages], outputs=[msg, submit, 
upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - undo.click( - fn=undo_history, inputs=chatbot, outputs=chatbot, queue=False - ).then( - fn=update_state_after_undo, inputs=[chatbot, log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - upvote.click( - fn=send_upvote, inputs=[chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=chatbot, queue=False - ).then( - fn=update_state_after_upvote, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - # flag.click( - # fn=send_flag, inputs=[chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=chatbot, queue=False - # ).then( - # fn=update_state_after_flag, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - # ) - - downvote.click( - fn=send_downvote, inputs=[chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=chatbot, queue=False - ).then( - fn=update_state_after_downvote, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - suggested_msg.submit( - fn=send_suggestion, inputs=[suggested_msg, downvote_reason, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=[suggested_msg, chatbot], queue=False - ).then( - fn=update_state_after_suggestion, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - suggested_submit.click( - fn=send_suggestion, inputs=[suggested_msg, downvote_reason, chatbot, temperature, top_p, top_k, repetition_penalty, acd, initial_prompt, log_messages], outputs=[suggested_msg, chatbot], queue=False - ).then( - fn=update_state_after_suggestion, inputs=[log_messages], outputs=[msg, submit, upvote, flag, downvote, clear, repeat, undo, suggestion], queue=False - ) - - -demo.queue(max_size=32, concurrency_count=2) -demo.launch() diff --git a/spaces/lukeslp/tts/README.md b/spaces/lukeslp/tts/README.md deleted file mode 100644 index c22ddca63d3f94e227f393810396d02e7fc428ee..0000000000000000000000000000000000000000 --- a/spaces/lukeslp/tts/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ElevenLabs TTS -emoji: 🗣️ -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -duplicated_from: elevenlabs/tts ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lvwerra/hf-review/html_template.html b/spaces/lvwerra/hf-review/html_template.html deleted file mode 100644 index 9007d9bf1eacabd64d65d31e3641dce4286ed66d..0000000000000000000000000000000000000000 --- a/spaces/lvwerra/hf-review/html_template.html +++ /dev/null @@ -1,24 +0,0 @@ -
-
-

Your Personal image Report

-
-
-
-

♥︎ Likes received: {{likes_user_total}} (top {{likes_user_top}}%)

-
-

 Your model that got the most likes ({{likes_repo_most_n}}): {{likes_repo_most}}

-
-
-

⤓ Total downloads: {{downloads_user_total}} (top {{downloads_user_top}}%)

-
-

 Your model that was downloaded the most ({{downlods_repo_most_n}}): {{downlods_repo_most}}

-
-
-

⚅ Number of models: {{repos_model_total}} (top {{repos_model_top}}%)

-
-

Your favourite model type is: {{repos_model_fav_type}}

-
-
-

⌂ You also have {{repos_datasets_total}} datasets and {{repos_spaces_total}} spaces!

-
-

 Your favourite Spaces SDK is {{repos_spaces_fav_sdk}}.

diff --git a/spaces/ma-xu/LIVE/pybind11/tests/cross_module_gil_utils.cpp b/spaces/ma-xu/LIVE/pybind11/tests/cross_module_gil_utils.cpp deleted file mode 100644 index 07db9f6e48a10dfd2d4370c3daff6e793d6675d2..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/cross_module_gil_utils.cpp +++ /dev/null @@ -1,73 +0,0 @@ -/* - tests/cross_module_gil_utils.cpp -- tools for acquiring GIL from a different module - - Copyright (c) 2019 Google LLC - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ -#include -#include - -// This file mimics a DSO that makes pybind11 calls but does not define a -// PYBIND11_MODULE. The purpose is to test that such a DSO can create a -// py::gil_scoped_acquire when the running thread is in a GIL-released state. -// -// Note that we define a Python module here for convenience, but in general -// this need not be the case. The typical scenario would be a DSO that implements -// shared logic used internally by multiple pybind11 modules. - -namespace { - -namespace py = pybind11; -void gil_acquire() { py::gil_scoped_acquire gil; } - -constexpr char kModuleName[] = "cross_module_gil_utils"; - -#if PY_MAJOR_VERSION >= 3 -struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - kModuleName, - NULL, - 0, - NULL, - NULL, - NULL, - NULL, - NULL -}; -#else -PyMethodDef module_methods[] = { - {NULL, NULL, 0, NULL} -}; -#endif - -} // namespace - -extern "C" PYBIND11_EXPORT -#if PY_MAJOR_VERSION >= 3 -PyObject* PyInit_cross_module_gil_utils() -#else -void initcross_module_gil_utils() -#endif -{ - - PyObject* m = -#if PY_MAJOR_VERSION >= 3 - PyModule_Create(&moduledef); -#else - Py_InitModule(kModuleName, module_methods); -#endif - - if (m != NULL) { - static_assert( - sizeof(&gil_acquire) == sizeof(void*), - "Function pointer must have the same size as void*"); - PyModule_AddObject(m, "gil_acquire_funcaddr", - PyLong_FromVoidPtr(reinterpret_cast(&gil_acquire))); - } - -#if PY_MAJOR_VERSION >= 3 - return m; -#endif -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/set_operations.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/set_operations.h deleted file mode 100644 index 421fa8a4bd955706497d0c9b30614035ccbbc46f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/set_operations.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system inherits set_operations -#include - diff --git a/spaces/marioboy/neil-breen/vocoder/distribution.py b/spaces/marioboy/neil-breen/vocoder/distribution.py deleted file mode 100644 index d3119a5ba1e77bc25a92d2664f83d366f12399c0..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/vocoder/distribution.py +++ /dev/null @@ -1,132 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F - - -def log_sum_exp(x): - """ numerically stable log_sum_exp implementation that prevents overflow """ - # TF ordering - axis = len(x.size()) - 1 - m, _ = torch.max(x, dim=axis) - m2, _ = torch.max(x, dim=axis, keepdim=True) - return m + torch.log(torch.sum(torch.exp(x - m2), dim=axis)) - - -# It is adapted from https://github.com/r9y9/wavenet_vocoder/blob/master/wavenet_vocoder/mixture.py -def discretized_mix_logistic_loss(y_hat, y, num_classes=65536, - log_scale_min=None, reduce=True): - if log_scale_min is None: - log_scale_min = float(np.log(1e-14)) - y_hat = y_hat.permute(0,2,1) - assert y_hat.dim() == 3 - assert y_hat.size(1) % 3 == 0 - nr_mix = y_hat.size(1) // 3 - - # (B x T x C) - y_hat = y_hat.transpose(1, 2) - - # unpack parameters. (B, T, num_mixtures) x 3 - logit_probs = y_hat[:, :, :nr_mix] - means = y_hat[:, :, nr_mix:2 * nr_mix] - log_scales = torch.clamp(y_hat[:, :, 2 * nr_mix:3 * nr_mix], min=log_scale_min) - - # B x T x 1 -> B x T x num_mixtures - y = y.expand_as(means) - - centered_y = y - means - inv_stdv = torch.exp(-log_scales) - plus_in = inv_stdv * (centered_y + 1. / (num_classes - 1)) - cdf_plus = torch.sigmoid(plus_in) - min_in = inv_stdv * (centered_y - 1. / (num_classes - 1)) - cdf_min = torch.sigmoid(min_in) - - # log probability for edge case of 0 (before scaling) - # equivalent: torch.log(F.sigmoid(plus_in)) - log_cdf_plus = plus_in - F.softplus(plus_in) - - # log probability for edge case of 255 (before scaling) - # equivalent: (1 - F.sigmoid(min_in)).log() - log_one_minus_cdf_min = -F.softplus(min_in) - - # probability for all other cases - cdf_delta = cdf_plus - cdf_min - - mid_in = inv_stdv * centered_y - # log probability in the center of the bin, to be used in extreme cases - # (not actually used in our code) - log_pdf_mid = mid_in - log_scales - 2. * F.softplus(mid_in) - - # tf equivalent - """ - log_probs = tf.where(x < -0.999, log_cdf_plus, - tf.where(x > 0.999, log_one_minus_cdf_min, - tf.where(cdf_delta > 1e-5, - tf.log(tf.maximum(cdf_delta, 1e-12)), - log_pdf_mid - np.log(127.5)))) - """ - # TODO: cdf_delta <= 1e-5 actually can happen. How can we choose the value - # for num_classes=65536 case? 1e-7? not sure.. - inner_inner_cond = (cdf_delta > 1e-5).float() - - inner_inner_out = inner_inner_cond * \ - torch.log(torch.clamp(cdf_delta, min=1e-12)) + \ - (1. - inner_inner_cond) * (log_pdf_mid - np.log((num_classes - 1) / 2)) - inner_cond = (y > 0.999).float() - inner_out = inner_cond * log_one_minus_cdf_min + (1. - inner_cond) * inner_inner_out - cond = (y < -0.999).float() - log_probs = cond * log_cdf_plus + (1. - cond) * inner_out - - log_probs = log_probs + F.log_softmax(logit_probs, -1) - - if reduce: - return -torch.mean(log_sum_exp(log_probs)) - else: - return -log_sum_exp(log_probs).unsqueeze(-1) - - -def sample_from_discretized_mix_logistic(y, log_scale_min=None): - """ - Sample from discretized mixture of logistic distributions - Args: - y (Tensor): B x C x T - log_scale_min (float): Log scale minimum value - Returns: - Tensor: sample in range of [-1, 1]. 
- """ - if log_scale_min is None: - log_scale_min = float(np.log(1e-14)) - assert y.size(1) % 3 == 0 - nr_mix = y.size(1) // 3 - - # B x T x C - y = y.transpose(1, 2) - logit_probs = y[:, :, :nr_mix] - - # sample mixture indicator from softmax - temp = logit_probs.data.new(logit_probs.size()).uniform_(1e-5, 1.0 - 1e-5) - temp = logit_probs.data - torch.log(- torch.log(temp)) - _, argmax = temp.max(dim=-1) - - # (B, T) -> (B, T, nr_mix) - one_hot = to_one_hot(argmax, nr_mix) - # select logistic parameters - means = torch.sum(y[:, :, nr_mix:2 * nr_mix] * one_hot, dim=-1) - log_scales = torch.clamp(torch.sum( - y[:, :, 2 * nr_mix:3 * nr_mix] * one_hot, dim=-1), min=log_scale_min) - # sample from logistic & clip to interval - # we don't actually round to the nearest 8bit value when sampling - u = means.data.new(means.size()).uniform_(1e-5, 1.0 - 1e-5) - x = means + torch.exp(log_scales) * (torch.log(u) - torch.log(1. - u)) - - x = torch.clamp(torch.clamp(x, min=-1.), max=1.) - - return x - - -def to_one_hot(tensor, n, fill_with=1.): - # we perform one hot encore with respect to the last axis - one_hot = torch.FloatTensor(tensor.size() + (n,)).zero_() - if tensor.is_cuda: - one_hot = one_hot.cuda() - one_hot.scatter_(len(tensor.size()), tensor.unsqueeze(-1), fill_with) - return one_hot diff --git a/spaces/masbejo99/modelscope-text-to-video-synthesis/README.md b/spaces/masbejo99/modelscope-text-to-video-synthesis/README.md deleted file mode 100644 index e03af3f7a198494395de89e2fcbf3385566dca26..0000000000000000000000000000000000000000 --- a/spaces/masbejo99/modelscope-text-to-video-synthesis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ModelScope Text To Video Synthesis -emoji: 🚀 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.22.1 -app_file: app.py -pinned: false -duplicated_from: damo-vilab/modelscope-text-to-video-synthesis ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mehzhats/dogbreedidentifier/app.py b/spaces/mehzhats/dogbreedidentifier/app.py deleted file mode 100644 index 20d67bf0bf3718b917f94c9d2b4c0c8c05b0b444..0000000000000000000000000000000000000000 --- a/spaces/mehzhats/dogbreedidentifier/app.py +++ /dev/null @@ -1,109 +0,0 @@ -from turtle import heading -import gradio as gr -import cv2 -import json -import numpy as np -import tensorflow as tf -from PIL import Image - -resnet50_preprocess_input = tf.keras.applications.resnet50.preprocess_input -resnet50 = tf.keras.applications.resnet50.ResNet50() - -resnet50_no_top = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, pooling="avg") -RESNET50_dog_predictor = tf.keras.models.load_model('./weights.best.RESNET50.hdf5') - -with open('labels.json', 'r') as f: - labels = json.load(f) - - -def ResNet50_predict_labels(img_array): - img = resnet50_preprocess_input(img_array) - return np.argmax(resnet50.predict(img)) - - -def face_detector(img): - face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml') - - gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - faces = face_cascade.detectMultiScale(gray) - - return len(faces) > 0 - - -def dog_detector(img_path): - prediction = ResNet50_predict_labels(img_path) - return ((prediction <= 268) & (prediction >= 151)) - - -def Resnet50_predict_breed(image_array, has_face): - # extract bottleneck features - bottleneck_feature = resnet50_no_top(image_array) - bottleneck_feature = np.expand_dims(bottleneck_feature, axis=0) - bottleneck_feature = 
np.expand_dims(bottleneck_feature, axis=0) - # obtain predicted vector - predicted_vector = RESNET50_dog_predictor.predict(bottleneck_feature) - - if has_face == True: - lbl = labels[np.argmax(predicted_vector)] - lbl = lbl.split("/")[-1].split(".")[-1] - lbl = " ".join(lbl.split("_")) - return lbl - - # return dog breed that is predicted by the model - top5 = np.argpartition(predicted_vector[0], -5)[-5:] - top_labels = [labels[t] for t in top5] - - top_labels = [top_label.split("/")[-1].split(".")[-1] for top_label in top_labels] - top_labels = [" ".join(top_label.split("_")) for top_label in top_labels] - - preds = {} - - for label, ind in zip(top_labels, top5): - print(label, predicted_vector[0][ind].item()) - preds[label] = predicted_vector[0][ind].item() - - print("Final predictions: ", preds) - return preds - - -def greet(image): - img = Image.fromarray(image) - img = img.resize((224, 224)) - img = np.array(img) - - has_face = face_detector(img) - - img = np.expand_dims(img, axis=0) - - is_dog = dog_detector(img) - - if (is_dog == False and has_face == False): - return "Error! Unknown image." - - dog_breed = Resnet50_predict_breed(img, has_face) - - - if (is_dog == True and has_face == False): - return dog_breed - - # the model for Dog detection is more accurate than human face detector. - # Therefore, it is safer to assume that if both dog_detector and face_detector - # return true, it is a dog and not a human. - if (is_dog == True and has_face == True): - return "Dog breed is: {}".format(dog_breed) - - if (is_dog == False and has_face == True): - return "The human resembles to {} dog breed.".format(dog_breed) - - -demo = gr.Interface( - greet, - title="Dog Breed Identifier", - description="A Convolutional Neural Network model trained to identify a dog's breed.Upload an image to check out the results.", - examples = [['./Labrador_retriever_06449.jpg'],['./Brittany_02625.jpg'], ['./sample_human_2.png']], - inputs="image", - outputs="label" -) - -demo.launch() - diff --git a/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js b/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js deleted file mode 100644 index 45e4fafb63a667109fdf81c03ed1d375027ae462..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js +++ /dev/null @@ -1,168 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -// console.clear() - -window.init = function(){ - var initFns = [window.initUtil, window.initScatter, window.initPair] - if (!initFns.every(d => d)) return - - window.util = initUtil() - - window.tidy = d3.csvParse(python_data.tidyCSV, d => { - return { - e0: +d.e0, - e1: +d.e1, - i0: +d.i0, - i1: +d.i1, - tokenIndex: +d.tokenIndex, - sentenceIndex: +d.sentenceIndex, - } - }) - - var bySentence = d3.nestBy(tidy, d => d.sentenceIndex) - bySentence.forEach(sent => { - sent.sentenceIndex = +sent.key - sent.s0 = python_data.sentences[sent.sentenceIndex].s0 - sent.s1 = python_data.sentences[sent.sentenceIndex].s1 - sent.orig = python_data.sentences[sent.sentenceIndex].orig - - sent.corrA = ss.sampleCorrelation(sent.map(d => d.i0), sent.map(d => d.i1)) - // sent.corrA = ss.sampleCorrelation(sent.map(d => d.e0), sent.map(d => d.e1)) - }) - - var sel = d3.select('.container').html(` -
-
-
-
-
-
-
- `) - .st({width: 1100}) - d3.selectAll('.left,.right').st({width: 500, display: 'inline-block', verticalAlign: 'top'}) - - function initBeeswarm(bySentence, sel){ - var c = d3.conventions({ - sel: sel.append('div'), - height: 80, - totalWidth: 400, - margin: {left: 0, top: 18} - }) - - c.x.domain(d3.extent(bySentence.map(d => +d.corrA))).nice() - // c.x.domain([0, 1]) - c.xAxis.ticks(5) - d3.drawAxis(c) - util.ggPlotBg(c) - c.svg.select('.y').remove() - c.svg.selectAll('.tick').st({display: 'block'}) - - var simulation = d3.forceSimulation(bySentence) - .force("x", d3.forceX(d => c.x(d.corrA)).strength(1)) - .force("y", d3.forceY(c.height / 2)) - .force("collide", d3.forceCollide(4)) - .stop() - - for (var i = 0; i < 120; ++i) simulation.tick() - - c.svg.append('text').text('text') - .text('Distribution of Spearman Correlation Coefficients') - .at({dy: -5, fontWeight: 600}) - - c.svg.appendMany('circle.sentence', bySentence) - .translate(d => [d.x, d.y]) - .at({ - r: 3, - fill: 'none', - stroke: '#000' - }) - .on('mouseover', setSentenceAsPair) - } - initBeeswarm(bySentence, d3.select('.beeswarm')) - - - function initList(bySentence, sel){ - // var sentenceSel = sel.st({height: 500, overflowY: 'scroll', cursor: 'default'}) - // .appendMany('div.sentence', _.sortBy(bySentence, d => d.corrA)) - // .on('mouseover', setSentenceAsPair) - // .st({padding: 2, fontSize: 12}) - - // sentenceSel.append('span') - // .text(d => (d3.format('+.2f')(d.corrA)).replace('0.', '.')) - // .st({marginRight: 10, color: '#aaa'}) - - // sentenceSel.append('span') - // .text(d => d.orig.replace('[', '').replace(']', '')) - - var tableSel = sel - .st({height: 470 + 17, overflowY: 'scroll', cursor: 'default', position: 'relative', left: -40}) - .append('table') - .st({fontSize: 12}) - - tableSel.append('tr.header') - .html(` - corr - template - `) - - var rowSel = tableSel - .appendMany('tr.sentence', _.sortBy(bySentence, d => d.corrA)) - .on('mouseover', setSentenceAsPair) - .st({padding: 2, fontSize: 12}) - .html(d => ` - ${(d3.format('+.2f')(d.corrA)).replace('0.', '.')} - ${d.orig.replace('[', '').replace(']', '')} - `) - } - initList(bySentence, d3.select('.list')) - - - - function setSentenceAsPair(s){ - s.e0 = d3.range(python_data.vocab.length).map(d => -Infinity) - s.e1 = d3.range(python_data.vocab.length).map(d => -Infinity) - s.forEach(d => { - s.e0[d.tokenIndex] = d.e0 - s.e1[d.tokenIndex] = d.e1 - }) - - s.label0 = s.s0 - s.label1 = s.s1 - s.vocab = python_data.vocab - s.count = python_settings.count || 150 - s.isDifference = python_settings.isDifference - - var sel = d3.select('.pair').html('').st({width: 400}) - - initPair(s, sel) - - d3.selectAll('.sentence').classed('active', d => d == s) - - d3.selectAll('div.sentence').filter(d => d == s) - .each(function(){ - this.scrollIntoView({ block: 'nearest', inline: 'nearest'}) - }) - } - - setSentenceAsPair(bySentence[0]) - -} - - -window.init() - diff --git a/spaces/merve/anonymization/source/dataset-worldviews/interface-images.js b/spaces/merve/anonymization/source/dataset-worldviews/interface-images.js deleted file mode 100644 index 5e7040a3a979423e2c88cdbf8c4e5e840a5b35d0..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/dataset-worldviews/interface-images.js +++ /dev/null @@ -1,8 +0,0 @@ -function createInterfaceImage(divName){ - - var c = d3.conventions({ - sel: d3.select('.' 
+ divName).html('') - }) - - -} \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/public/measuring-fairness/students.js b/spaces/merve/fill-in-the-blank/public/measuring-fairness/students.js deleted file mode 100644 index 4af55cba8cc763d96aa478be96a785048d9edc42..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/measuring-fairness/students.js +++ /dev/null @@ -1,90 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - -window.makeStudents = function(){ - var seed = new Math.seedrandom('he4a15') - var rand = d3.randomUniform.source(seed)(0, 1) - var letters = 'abcdefgijlmnopqrsuvwxyz' - letters = (letters + letters.toUpperCase()).split('') - - var nSickCols = 6 - var mSickCols = 8 - var fSickCols = nSickCols*2 - mSickCols - - var students = d3.range(nCols*nCols).map(i => { - var letter = letters[~~d3.randomUniform.source(seed)(0, letters.length)()] - - var isMale = i % 2 == 0 - var isSick = i < (isMale ? mSickCols : fSickCols)*nCols - var grade = isSick*.5 + rand() - var pos = {} - - return {letter, isSick, isMale, grade, pos} - }) - - students = _.sortBy(students, d => -d.grade) - d3.nestBy(students, d => d.isSick).forEach(group => { - var isSick = group[0].isSick - - var sickCols = nSickCols - var cols = isSick ? sickCols : nCols - sickCols - var xOffset = isSick ? 0 : sickCols - - group.forEach((d, i) => { - d.pos.allIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols)] - var spreadIJ = d.pos.allIJ.slice() - if (!d.isSick) spreadIJ[0] += .1 - d.pos.all = spreadIJ.map(d => d*c.width/10) - }) - }) - - d3.nestBy(students, d => d.isSick + '-' + d.isMale).forEach(group => { - var isSick = group[0].isSick - var isMale = group[0].isMale - - var sickCols = isMale ? mSickCols : fSickCols - var cols = isSick ? sickCols : nCols - sickCols - var xOffset = isSick ? 0 : sickCols - var yOffset = isMale ? 
nCols/2 + 2 : 0 - - group.forEach((d, i) => { - d.pos.sexIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols) + yOffset] - d.pos.sexGroupIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols)] - var spreadIJ = d.pos.sexIJ.slice() - if (!d.isSick) spreadIJ[0] += .1 - d.pos.sex = spreadIJ.map(d => d*c.width/10) - }) - }) - - students.maleOffsetJ = nCols/2 + 2 - students.maleOffsetPx= students.maleOffsetJ*c.width/10 - - students.fSickCols = fSickCols - students.mSickCols = mSickCols - - students.colWidth = c.width/10 - - students.rand = rand - return students -} - - - - - - -if (window.init) window.init() diff --git a/spaces/merve/uncertainty-calibration/source/hidden-bias/annotations.js b/spaces/merve/uncertainty-calibration/source/hidden-bias/annotations.js deleted file mode 100644 index b0fd377b443ee9bd31e7bd1d9dbacafc4e5282e3..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/hidden-bias/annotations.js +++ /dev/null @@ -1,86 +0,0 @@ -window.annotations = [ - { - "slide": 0, - "x": 1.77, - "y": 3.17, - "path": "M -3,-59 A 31.215 31.215 0 1 0 -10,2", - "text": "Joshua had a high school GPA of 3.2 and 1.8 in college", - "textOffset": [ - -1, - -48 - ] - }, - { - "slide": 0, - "x": 2.93, - "y": 2.08, - "path": "M 56,61 A 45.102 45.102 0 0 0 19.000001907348633,1.0000003576278687", - "text": "Abigail has a 2.1 in high school and 2.9 in college", - "textOffset": [ - -5, - 85 - ], - "width": 18 - }, - { - "slide": 1, - "x": 3.7, - "y": 2, - "path": "M 1,41 A 209.709 209.709 0 0 1 -310,76", - "text": "Most students have a higher GPA in high school", - "textOffset": [ - -69, - 11 - ], - "width": 18 - }, - { - "slide": 2, - "x": 1, - "y": 4, - "path": "M 0 0", - "text": "A well adjusted model will usually over predict about half the students' grades...", - "textOffset": [ - 25, - 50 - ], - "width": 25 - }, - { - "slide": 2, - "x": 4, - "y": 1, - "path": "M 0 0", - "text": "...and under predict the other half", - "textOffset": [ - -109, - -51 - ], - "width": 18 - }, - { - "slide": 5, - "x": 2.58, - "y": 2, - "path": "M 54,34 A 29.707 29.707 0 0 0 11,-6", - "text": "The model predicted both Lucas and Mia would get a 2.0, but she ended up with a higher GPA", - "html": "The model predicted both Lucas and Mia would get a 2.0, but she ended up with a higher GPA", - "textOffset": [ - -22, - 44 - ], - "width": 23 - }, - { - "slide": 5, - "x": 2.14, - "y": 2, - "path": "M 40,61 A 35.025 35.025 0 0 1 -4,7", - "text": "", - "textOffset": [ - -100, - 179 - ], - "width": 14 - } -] \ No newline at end of file diff --git a/spaces/mithril-security/blind_chat/src/routes/login/callback/updateUser.ts b/spaces/mithril-security/blind_chat/src/routes/login/callback/updateUser.ts deleted file mode 100644 index 90d7e0a440a5bc31eff064d699aa7575e5b9a210..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/routes/login/callback/updateUser.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { authCondition, refreshSessionCookie } from "$lib/server/auth"; -import { collections } from "$lib/server/database"; -import { DEFAULT_SETTINGS } from "$lib/types/Settings"; -import { z } from "zod"; -import type { UserinfoResponse } from "openid-client"; -import type { Cookies } from "@sveltejs/kit"; - -export async function updateUser(params: { - userData: UserinfoResponse; - locals: App.Locals; - cookies: Cookies; -}) {} diff --git a/spaces/momegas/megabots/setup.py b/spaces/momegas/megabots/setup.py deleted file mode 100644 index 
d89fe9a8dd2fd4c179402fd9f9b5d1ac43697d3e..0000000000000000000000000000000000000000 --- a/spaces/momegas/megabots/setup.py +++ /dev/null @@ -1,40 +0,0 @@ -from setuptools import setup, find_packages - -VERSION = "0.0.11" - -setup( - name="megabots", - version=VERSION, - packages=find_packages(), - install_requires=[ - "langchain", - "tiktoken", - "unstructured", - "fastapi", - "faiss-cpu", - "pdfminer.six", - "gradio", - "python-dotenv", - "openai", - ], - author="Megaklis Vasilakis", - author_email="megaklis.vasilakis@gmail.com", - description="🤖 Megabots provides State-of-the-art, production ready bots made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵", - long_description=open("README.md").read(), - long_description_content_type="text/markdown", - url="https://github.com/momegas/megabots", - keywords=[ - "bot", - "qna-bot", - "information-retrieval", - "chatbot", - "question-answering", - "prompt-engineering", - ], - license="MIT", - classifiers=[ - # Choose appropriate classifiers from - # https://pypi.org/classifiers/ - "Development Status :: 4 - Beta" - ], -) diff --git a/spaces/momegas/megas-bot/README.md b/spaces/momegas/megas-bot/README.md deleted file mode 100644 index 44574601a5062e940007737a8e9ccc31ffd1ec0d..0000000000000000000000000000000000000000 --- a/spaces/momegas/megas-bot/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Qnabot -emoji: 💻 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit -python_version: 3.10.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/discriminative_reranking_nmt/models/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/discriminative_reranking_nmt/models/__init__.py deleted file mode 100644 index c593ea5f1842794bfcc952fc93c679a5f16aeb98..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/discriminative_reranking_nmt/models/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .discriminative_reranking_model import DiscriminativeNMTReranker - - -__all__ = [ - "DiscriminativeNMTReranker", -] diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/wsc/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/roberta/wsc/__init__.py deleted file mode 100644 index 78afa4728eeed96142900118f6452730023466c9..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/wsc/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import wsc_criterion # noqa -from . 
import wsc_task # noqa diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/unsupervised_quality_estimation/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/unsupervised_quality_estimation/README.md deleted file mode 100644 index e86a0d13b883af0c37fdc2c1fee9b0b9dff4d18c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/unsupervised_quality_estimation/README.md +++ /dev/null @@ -1,126 +0,0 @@ -# Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020) - -This page includes instructions for reproducing results from the paper [Unsupervised Quality Estimation for Neural -Machine Translation (Fomicheva et al., 2020)](https://arxiv.org/abs/2005.10608) - -## Requirements: - -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* subword-nmt: https://github.com/rsennrich/subword-nmt -* flores: https://github.com/facebookresearch/flores - -## Download Models and Test Data - -Download translation models and test data from [MLQE dataset repository](https://github.com/facebookresearch/mlqe). - -## Set up: - -Given a testset consisting of source sentences and reference translations: - -* `SRC_LANG`: source language -* `TGT_LANG`: target language -* `INPUT`: input prefix, such that the file `$INPUT.$SRC_LANG` contains source sentences and `$INPUT.$TGT_LANG` -contains the reference sentences -* `OUTPUT_DIR`: output path to store results -* `MOSES_DECODER`: path to mosesdecoder installation -* `BPE_ROOT`: path to subword-nmt installation -* `BPE`: path to BPE model -* `MODEL_DIR`: directory containing the NMT model `.pt` file as well as the source and target vocabularies. -* `TMP`: directory for intermediate temporary files -* `GPU`: if translating with GPU, id of the GPU to use for inference -* `DROPOUT_N`: number of stochastic forward passes - -`$DROPOUT_N` is set to 30 in the experiments reported in the paper. However, we observed that increasing it beyond 10 -does not bring substantial improvements. - -## Translate the data using standard decoding - -Preprocess the input data: -``` -for LANG in $SRC_LANG $TGT_LANG; do - perl $MOSES_DECODER/scripts/tokenizer/tokenizer.perl -threads 80 -a -l $LANG < $INPUT.$LANG > $TMP/preprocessed.tok.$LANG - python $BPE_ROOT/apply_bpe.py -c ${BPE} < $TMP/preprocessed.tok.$LANG > $TMP/preprocessed.tok.bpe.$LANG -done -``` - -Binarize the data for faster translation: - -``` -fairseq-preprocess --srcdict $MODEL_DIR/dict.$SRC_LANG.txt --tgtdict $MODEL_DIR/dict.$TGT_LANG.txt ---source-lang ${SRC_LANG} --target-lang ${TGT_LANG} --testpref $TMP/preprocessed.tok.bpe --destdir $TMP/bin --workers 4 -``` - -Translate - -``` -CUDA_VISIBLE_DEVICES=$GPU fairseq-generate $TMP/bin --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5 ---source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 > $TMP/fairseq.out -grep ^H $TMP/fairseq.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/mt.out -``` - -Post-process - -``` -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/mt.out | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $OUTPUT_DIR/mt.out -``` - -## Produce uncertainty estimates - -### Scoring - -Make temporary files to store the translations repeated N times. 
- -``` -python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/preprocessed.tok.bpe.$SRC_LANG -n $DROPOUT_N --o $TMP/repeated.$SRC_LANG -python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/mt.out -n $DROPOUT_N -o $TMP/repeated.$TGT_LANG - -fairseq-preprocess --srcdict ${MODEL_DIR}/dict.${SRC_LANG}.txt $TGT_DIC --source-lang ${SRC_LANG} ---target-lang ${TGT_LANG} --testpref ${TMP}/repeated --destdir ${TMP}/bin-repeated -``` - -Produce model scores for the generated translations using `--retain-dropout` option to apply dropout at inference time: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt --beam 5 - --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 --score-reference --retain-dropout - --retain-dropout-modules '["TransformerModel","TransformerEncoder","TransformerDecoder","TransformerEncoderLayer"]' - TransformerDecoderLayer --seed 46 > $TMP/dropout.scoring.out - -grep ^H $TMP/dropout.scoring.out | cut -d- -f2- | sort -n | cut -f2 > $TMP/dropout.scores - -``` - -Use `--retain-dropout-modules` to specify the modules. By default, dropout is applied in the same places -as for training. - -Compute the mean of the resulting output distribution: - -``` -python $SCRIPTS/scripts/uncertainty/aggregate_scores.py -i $TMP/dropout.scores -o $OUTPUT_DIR/dropout.scores.mean --n $DROPOUT_N -``` - -### Generation - -Produce multiple translation hypotheses for the same source using `--retain-dropout` option: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt - --beam 5 --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --retain-dropout - --unkpen 5 --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder -TransformerEncoderLayer TransformerDecoderLayer --seed 46 > $TMP/dropout.generation.out - -grep ^H $TMP/dropout.generation.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/dropout.hypotheses_ - -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/dropout.hypotheses_ | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $TMP/dropout.hypotheses -``` - -Compute similarity between multiple hypotheses corresponding to the same source sentence using Meteor -evaluation metric: -``` -python meteor.py -i $TMP/dropout.hypotheses -m -n $DROPOUT_N -o -$OUTPUT_DIR/dropout.gen.sim.meteor -``` diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/data_utils.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/data_utils.py deleted file mode 100644 index b3de57681e0fb6b026003eff19f7745caf6799d3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/data_utils.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
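
The scoring recipe above ends by averaging the N stochastic (dropout-enabled) model scores produced for each source segment; the `aggregate_scores.py` script it invokes is not reproduced in this diff. Below is only a minimal sketch of that mean-aggregation step, assuming the N scores for each segment are written as consecutive lines of the input file; the file names and the helper name `aggregate_dropout_scores` are placeholders, not the actual fairseq script.

```python
# Minimal sketch of mean-aggregating Monte Carlo dropout scores per segment.
# Assumption: the score file holds one score per line, with the n_passes
# stochastic scores for a given segment stored as a consecutive block.
import numpy as np


def aggregate_dropout_scores(score_path: str, n_passes: int) -> np.ndarray:
    # Load one score per line.
    scores = np.loadtxt(score_path, dtype=float)
    assert scores.size % n_passes == 0, "expected n_passes scores per segment"
    # Mean over the stochastic forward passes -> one score per source segment.
    return scores.reshape(-1, n_passes).mean(axis=1)


if __name__ == "__main__":
    # Placeholder file names mirroring the recipe above (N = 30 passes).
    means = aggregate_dropout_scores("dropout.scores", 30)
    np.savetxt("dropout.scores.mean", means, fmt="%.6f")
```
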
- -try: - from collections.abc import Iterable -except ImportError: - from collections import Iterable -import contextlib -import itertools -import logging -import re -import warnings -from typing import Optional, Tuple - -import numpy as np -import torch - -from fairseq.file_io import PathManager -from fairseq import utils -import os - -logger = logging.getLogger(__name__) - - -def infer_language_pair(path): - """Infer language pair from filename: .-.(...).idx""" - src, dst = None, None - for filename in PathManager.ls(path): - parts = filename.split(".") - if len(parts) >= 3 and len(parts[1].split("-")) == 2: - return parts[1].split("-") - return src, dst - - -def collate_tokens( - values, - pad_idx, - eos_idx=None, - left_pad=False, - move_eos_to_beginning=False, - pad_to_length=None, - pad_to_multiple=1, - pad_to_bsz=None, -): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) - size = size if pad_to_length is None else max(size, pad_to_length) - if pad_to_multiple != 1 and size % pad_to_multiple != 0: - size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple) - - batch_size = len(values) if pad_to_bsz is None else max(len(values), pad_to_bsz) - res = values[0].new(batch_size, size).fill_(pad_idx) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if move_eos_to_beginning: - if eos_idx is None: - # if no eos_idx is specified, then use the last token in src - dst[0] = src[-1] - else: - dst[0] = eos_idx - dst[1:] = src[:-1] - else: - dst.copy_(src) - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)]) - return res - -def load_indexed_dataset( - path, dictionary=None, dataset_impl=None, combine=False, default="cached" -): - """A helper function for loading indexed datasets. - - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. If - not provided, it will be inferred automatically. For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. 
- """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - Similar to :func:`filter` but collects filtered elements in ``filtered``. - - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. - - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. - Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). - """ - warnings.warn( - "data_utils.filter_by_size is deprecated. 
" - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). - fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). 
- """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = ( - int(max_tokens) if max_tokens is not None else -1 - ) - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - sentence = sentence.replace("", "") - sentence = re.sub(' +', ' ', sentence).strip() - elif symbol == "_EOW": - sentence = sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. - however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. 
mask is min 1 element - poisson = sample from possion distribution with lambda = mask length - min_masks: minimum number of masked spans - no_overlap: if false, will switch to an alternative recursive algorithm that prevents spans from overlapping - min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans - """ - - bsz, all_sz = shape - mask = np.full((bsz, all_sz), False) - - all_num_mask = int( - # add a random number for probabilistic rounding - mask_prob * all_sz / float(mask_length) - + np.random.rand() - ) - - all_num_mask = max(min_masks, all_num_mask) - - mask_idcs = [] - for i in range(bsz): - if padding_mask is not None: - sz = all_sz - padding_mask[i].long().sum().item() - num_mask = int( - # add a random number for probabilistic rounding - mask_prob * sz / float(mask_length) - + np.random.rand() - ) - num_mask = max(min_masks, num_mask) - else: - sz = all_sz - num_mask = all_num_mask - - if mask_type == "static": - lengths = np.full(num_mask, mask_length) - elif mask_type == "uniform": - lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask) - elif mask_type == "normal": - lengths = np.random.normal(mask_length, mask_other, size=num_mask) - lengths = [max(1, int(round(x))) for x in lengths] - elif mask_type == "poisson": - lengths = np.random.poisson(mask_length, size=num_mask) - lengths = [int(round(x)) for x in lengths] - else: - raise Exception("unknown mask selection " + mask_type) - - if sum(lengths) == 0: - lengths[0] = min(mask_length, sz - 1) - - if no_overlap: - mask_idc = [] - - def arrange(s, e, length, keep_length): - span_start = np.random.randint(s, e - length) - mask_idc.extend(span_start + i for i in range(length)) - - new_parts = [] - if span_start - s - min_space >= keep_length: - new_parts.append((s, span_start - min_space + 1)) - if e - span_start - keep_length - min_space > keep_length: - new_parts.append((span_start + length + min_space, e)) - return new_parts - - parts = [(0, sz)] - min_length = min(lengths) - for length in sorted(lengths, reverse=True): - lens = np.fromiter( - (e - s if e - s >= length + min_space else 0 for s, e in parts), - np.int, - ) - l_sum = np.sum(lens) - if l_sum == 0: - break - probs = lens / np.sum(lens) - c = np.random.choice(len(parts), p=probs) - s, e = parts.pop(c) - parts.extend(arrange(s, e, length, min_length)) - mask_idc = np.asarray(mask_idc) - else: - min_len = min(lengths) - if sz - min_len <= num_mask: - min_len = sz - num_mask - 1 - - mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) - - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ] - ) - - mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) - - min_len = min([len(m) for m in mask_idcs]) - for i, mask_idc in enumerate(mask_idcs): - if len(mask_idc) > min_len: - mask_idc = np.random.choice(mask_idc, min_len, replace=False) - mask[i, mask_idc] = True - - return mask - - -def get_mem_usage(): - try: - import psutil - - mb = 1024 * 1024 - return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb" - except ImportError: - return "N/A" - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_padding_mask(lens): - bsz, max_lens = lens.size(0), torch.max(lens).item() - mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: 
torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation='lower', - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) diff --git a/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/models/diffusion/ddim.py b/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/models/diffusion/ddim.py deleted file mode 100644 index edf1eaff9e78ac2e6778914b706b6a4fff51a8fe..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,203 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and 
others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / 
self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_paris_256.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_paris_256.sh deleted file mode 100644 index 67061298b601ce4e1c37966852421f2153a0d686..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_paris_256.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env bash - -# paths to data are valid for mml-ws01 -OUT_DIR="/media/inpainting/paper_data/Paris_StreetView_Dataset_val_256" - -source "$(dirname $0)/env.sh" - -for datadir in paris_eval_gt -do - for conf in random_thin_256 random_medium_256 random_thick_256 segm_256 - do - "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-paris \ - location.out_dir=$OUT_DIR cropping.out_square_crop=False cropping.out_min_size=256 - - "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" - done -done diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/base.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/base.py deleted file mode 100644 index 675f01682ddf5e31b6cc341735378c6f3b242e49..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/base.py +++ /dev/null @@ -1,73 +0,0 @@ -import abc -from typing import Dict, List - -import numpy as np -import torch -from skimage import color -from skimage.segmentation import mark_boundaries - -from . 
import colors - -COLORS, _ = colors.generate_colors(151) # 151 - max classes for semantic segmentation - - -class BaseVisualizer: - @abc.abstractmethod - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - """ - Take a batch, make an image from it and visualize - """ - raise NotImplementedError() - - -def visualize_mask_and_images(images_dict: Dict[str, np.ndarray], keys: List[str], - last_without_mask=True, rescale_keys=None, mask_only_first=None, - black_mask=False) -> np.ndarray: - mask = images_dict['mask'] > 0.5 - result = [] - for i, k in enumerate(keys): - img = images_dict[k] - img = np.transpose(img, (1, 2, 0)) - - if rescale_keys is not None and k in rescale_keys: - img = img - img.min() - img /= img.max() + 1e-5 - if len(img.shape) == 2: - img = np.expand_dims(img, 2) - - if img.shape[2] == 1: - img = np.repeat(img, 3, axis=2) - elif (img.shape[2] > 3): - img_classes = img.argmax(2) - img = color.label2rgb(img_classes, colors=COLORS) - - if mask_only_first: - need_mark_boundaries = i == 0 - else: - need_mark_boundaries = i < len(keys) - 1 or not last_without_mask - - if need_mark_boundaries: - if black_mask: - img = img * (1 - mask[0][..., None]) - img = mark_boundaries(img, - mask[0], - color=(1., 0., 0.), - outline_color=(1., 1., 1.), - mode='thick') - result.append(img) - return np.concatenate(result, axis=1) - - -def visualize_mask_and_images_batch(batch: Dict[str, torch.Tensor], keys: List[str], max_items=10, - last_without_mask=True, rescale_keys=None) -> np.ndarray: - batch = {k: tens.detach().cpu().numpy() for k, tens in batch.items() - if k in keys or k == 'mask'} - - batch_size = next(iter(batch.values())).shape[0] - items_to_vis = min(batch_size, max_items) - result = [] - for i in range(items_to_vis): - cur_dct = {k: tens[i] for k, tens in batch.items()} - result.append(visualize_mask_and_images(cur_dct, keys, last_without_mask=last_without_mask, - rescale_keys=rescale_keys)) - return np.concatenate(result, axis=0) diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/modules/activations.py b/spaces/nakas/MusicGenDemucs/audiocraft/modules/activations.py deleted file mode 100644 index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. 
Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. - - Args: - activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/modules/lstm.py b/spaces/nakas/MusicGenDemucs/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. 
- """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/training/fine_tuning.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/training/fine_tuning.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/nateraw/run-script-in-background/README.md b/spaces/nateraw/run-script-in-background/README.md deleted file mode 100644 index 80b2cc8cf93afe1f9fce64901fc78a96bea678e6..0000000000000000000000000000000000000000 --- a/spaces/nateraw/run-script-in-background/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Asdf123 -emoji: 🚀 -colorFrom: indigo -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CCleaner V5.21.5700 Setup All Editions Activator [TOP].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CCleaner V5.21.5700 Setup All Editions Activator [TOP].md deleted file mode 100644 index e90e249e0659f2ec2df3b458a1d1d6512f6f88a2..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CCleaner V5.21.5700 Setup All Editions Activator [TOP].md +++ /dev/null @@ -1,29 +0,0 @@ - -

How to Install and Activate CCleaner v5.21.5700 for All Editions

-

CCleaner is a popular and powerful tool that can optimize your PC performance, clean up junk files, erase your browsing history, and fix registry errors. It supports various editions, such as Professional, Network, Business, and Technician. In this article, we will show you how to install and activate CCleaner v5.21.5700 for any edition you want.

-

Step 1: Download CCleaner v5.21.5700 Setup

-

You can download the setup file from the official website of CCleaner[^1^] or from other trusted sources[^2^]. The file size is about 7.85 MB and it works on Windows 10, 8.1, 8, 7, Vista, and XP (both 32-bit and 64-bit versions).

-

CCleaner v5.21.5700 Setup All Editions Activator


Download Ziphttps://urlcod.com/2uIbpd



-

Step 2: Install CCleaner v5.21.5700

-

After downloading the setup file, double-click on it to start the installation process. You will see a welcome screen where you can choose your language and click on "Install". You can also customize your installation options by clicking on "Customize" and selecting the components you want to install.

-

The installation process will take a few minutes and you will see a confirmation screen when it is done. You can choose to run CCleaner or view the release notes by checking the corresponding boxes. Click on "Finish" to close the installer.

-

Step 3: Activate CCleaner v5.21.5700 for All Editions

-

To activate CCleaner v5.21.5700 for any edition you want, you will need an activator tool that can generate valid license keys for you. You can download the activator tool from here[^2^] or from other sources that you trust.

-

After downloading the activator tool, unzip it and run it as administrator. You will see a simple interface where you can select the edition you want to activate from the drop-down menu. Then click on "Generate" to create a license key for that edition.

-

Copy the license key and open CCleaner. Go to "Options" > "About" and click on "Upgrade to Pro". Paste the license key in the box and click on "Register". You will see a message that says "Thank you for purchasing CCleaner!" and your edition will be activated.

-

Conclusion

-

CCleaner v5.21.5700 is a useful tool that can help you keep your PC running smoothly and securely. By following these steps, you can install and activate it for any edition you want with ease. Enjoy using CCleaner and keep your PC clean!

- -

How to Use CCleaner v5.21.5700

-

CCleaner v5.21.5700 has a simple and intuitive user interface that allows you to access its features easily. You can use CCleaner to perform four main tasks: Cleaner, Registry, Tools, and Options.

-

Cleaner

-

The Cleaner feature allows you to scan and remove unnecessary files from your system and browsers. You can choose which items you want to clean by checking or unchecking the boxes in the Windows and Applications tabs. Then click on "Analyze" to see how much space you can free up. If you are satisfied with the results, click on "Run Cleaner" to delete the files.

-

-

Registry

-

The Registry feature allows you to scan and fix errors and invalid entries in your Windows registry. You can choose which categories you want to scan by checking or unchecking the boxes in the Registry tab. Then click on "Scan for Issues" to see what problems CCleaner can fix. If you agree with the results, click on "Fix selected Issues" to repair the registry.

-

Tools

-

The Tools feature allows you to perform various tasks such as uninstalling programs, managing startup items, finding duplicate files, wiping free space, restoring system files, and more. You can access these tools by clicking on the corresponding icons in the Tools tab. Each tool has its own options and instructions that you can follow.

-

Options

-

The Options feature allows you to customize CCleaner's settings and preferences. You can access these options by clicking on the corresponding icons in the Options tab. Some of the options include cookies, include/exclude files and folders, monitoring, smart cleaning, updates, privacy, and advanced settings.

81aa517590
-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Crack ESET4 Box4EVER V4.30A (Trial 31days).rar !!LINK!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Crack ESET4 Box4EVER V4.30A (Trial 31days).rar !!LINK!!.md deleted file mode 100644 index 6cb9b1f938c1ed8b024f6d345232b044774ed5b0..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Crack ESET4 Box4EVER V4.30A (Trial 31days).rar !!LINK!!.md +++ /dev/null @@ -1,58 +0,0 @@ -
-

How to Crack ESET4 Box4EVER V4.30A (Trial 31days).rar and Get Unlimited Protection

- -

If you are looking for a way to crack ESET4 Box4EVER V4.30A (Trial 31days).rar and get unlimited protection from viruses, malware, spyware, and other threats, you have come to the right place. In this article, we will show you how to download, install, and activate ESET4 Box4EVER V4.30A (Trial 31days).rar using a simple and easy method.

- -

What is ESET4 Box4EVER V4.30A (Trial 31days).rar?

- -

ESET4 Box4EVER V4.30A (Trial 31days).rar is a crack file that allows you to use ESET NOD32 Antivirus 4 for free without any limitations. ESET NOD32 Antivirus 4 is one of the best antivirus software in the market, offering fast and reliable protection against all kinds of online threats. It has a user-friendly interface, low system impact, and advanced features such as anti-phishing, anti-spyware, anti-rootkit, and anti-spam.

-

Crack ESET4 Box4EVER V4.30A (Trial 31days).rar


Download Zip ••• https://urlcod.com/2uIbik



- -

ESET4 Box4EVER V4.30A (Trial 31days).rar works by resetting the trial period of ESET NOD32 Antivirus 4 every time you restart your computer. This way, you can enjoy the full functionality of the software without paying anything or worrying about expiration dates.

- -

How to Download ESET4 Box4EVER V4.30A (Trial 31days).rar?

- -

To download ESET4 Box4EVER V4.30A (Trial 31days).rar, you need to follow these steps:

- -
    -
  1. Go to this website [^1^] and click on the "Download" button.
  2. -
  3. Wait for a few seconds until the download link appears and click on it.
  4. -
  5. Save the file to your preferred location on your computer.
  6. -
- -

Alternatively, you can also use this link [^2^] to download the file directly from Rapidshare.com.

- -

How to Install ESET4 Box4EVER V4.30A (Trial 31days).rar?

- -

To install ESET4 Box4EVER V4.30A (Trial 31days).rar, you need to follow these steps:

- -
    -
  1. Extract the file using WinRAR or any other software that can handle RAR files.
  2. -
  3. Run the file "ESET Fix v430A.exe" as administrator.
  4. -
  5. Select your language and click on "Next".
  6. -
  7. Accept the license agreement and click on "Next".
  8. -
  9. Select your installation folder and click on "Next".
  10. -
  11. Wait for the installation to finish and click on "Finish".
  12. -
- -

How to Activate ESET4 Box4EVER V4.30A (Trial 31days).rar?

- -

To activate ESET4 Box4EVER V4.30A (Trial 31days).rar, you need to follow these steps:

- -
    -
  1. Open ESET NOD32 Antivirus 4 from your desktop or start menu.
  2. -
  3. Click on "Update" and then on "Username and Password setup".
  4. -
  5. Enter any username and password of your choice and click on "OK".
  6. -
  7. Click on "Update now" and wait for the update to complete.
  8. -
  9. Restart your computer.
  10. -
- -

Congratulations! You have successfully cracked ESET NOD32 Antivirus 4 using ESET Fix v430A.exe. You can now enjoy unlimited protection from all kinds of online threats for free.

-

- -

Tips and Warnings

- -
    -
  • e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/neuesql/sqlgptapp/client.py b/spaces/neuesql/sqlgptapp/client.py deleted file mode 100644 index 203cfc5f6f30c628b94a240b5dd3df516baa6b59..0000000000000000000000000000000000000000 --- a/spaces/neuesql/sqlgptapp/client.py +++ /dev/null @@ -1,64 +0,0 @@ -from abc import ABC, abstractmethod -from typing import List -from loguru import logger -import openai - - -class SQLService(ABC): - - @abstractmethod - def translate(self, source_db: str, target_db: str, sql: str) -> List: - ... - - -class OpenAIService(SQLService): - - def __init__(self, api_key) -> None: - super().__init__() - openai.api_key = api_key - - - def translate(self, source_db: str, target_db: str, sql: str) -> List: - results = [] - try: - response = openai.Completion.create( - model="text-davinci-003", - prompt=f"##### Translate this function from Oracle into Postgresql\n" - f"### {source_db}" - f"" - f" {sql}" - f"" - f"### {target_db}", - temperature=0, - max_tokens=2048, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=["###"] - ) - for choice in response.choices: - logger.info(f"transform {source_db} to {target_db}, SQL:") - logger.debug(choice.text) - results.append(choice.text) - except Exception as ex: - logger.error(f"transform from {source_db} to {target_db}, failed \n {sql}") - logger.exception(ex) - return results - - -class GoogleT5Service(SQLService): - - def translate(self, source_db: str, target_db: str, sql: str) -> List: - return ["In Developing now"] - - def __init__(self): - ... - - -class FacebookLLAMAService(SQLService): - - def translate(self, source_db: str, target_db: str, sql: str) -> List: - return ["In facebook waiting list"] - - def __init__(self): - ... diff --git a/spaces/neural-ti/NeTI/utils/types.py b/spaces/neural-ti/NeTI/utils/types.py deleted file mode 100644 index 0a994f860c45d86746fcebc655bd7e9a3902de20..0000000000000000000000000000000000000000 --- a/spaces/neural-ti/NeTI/utils/types.py +++ /dev/null @@ -1,20 +0,0 @@ -import enum -from dataclasses import dataclass -from typing import Optional - -import torch - - -@dataclass -class NeTIBatch: - input_ids: torch.Tensor - placeholder_token_id: int - timesteps: torch.Tensor - unet_layers: torch.Tensor - truncation_idx: Optional[int] = None - - -@dataclass -class PESigmas: - sigma_t: float - sigma_l: float diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/lvis.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/lvis.py deleted file mode 100644 index 576d962c8ce23ce31a01839b232cec89817186de..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/lvis.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import os -from fvcore.common.timer import Timer - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.structures import BoxMode -from detectron2.utils.file_io import PathManager - -from .builtin_meta import _get_coco_instances_meta -from .lvis_v0_5_categories import LVIS_CATEGORIES as LVIS_V0_5_CATEGORIES -from .lvis_v1_categories import LVIS_CATEGORIES as LVIS_V1_CATEGORIES -from .lvis_v1_category_image_count import LVIS_CATEGORY_IMAGE_COUNT as LVIS_V1_CATEGORY_IMAGE_COUNT - -""" -This file contains functions to parse LVIS-format annotations into dicts in the -"Detectron2 format". 
-""" - -logger = logging.getLogger(__name__) - -__all__ = ["load_lvis_json", "register_lvis_instances", "get_lvis_instances_meta"] - - -def register_lvis_instances(name, metadata, json_file, image_root): - """ - Register a dataset in LVIS's json annotation format for instance detection and segmentation. - - Args: - name (str): a name that identifies the dataset, e.g. "lvis_v0.5_train". - metadata (dict): extra metadata associated with this dataset. It can be an empty dict. - json_file (str): path to the json instance annotation file. - image_root (str or path-like): directory which contains all the images. - """ - DatasetCatalog.register(name, lambda: load_lvis_json(json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, evaluator_type="lvis", **metadata - ) - - -def load_lvis_json(json_file, image_root, dataset_name=None, extra_annotation_keys=None): - """ - Load a json file in LVIS's annotation format. - - Args: - json_file (str): full path to the LVIS json annotation file. - image_root (str): the directory where the images in this json file exists. - dataset_name (str): the name of the dataset (e.g., "lvis_v0.5_train"). - If provided, this function will put "thing_classes" into the metadata - associated with this dataset. - extra_annotation_keys (list[str]): list of per-annotation keys that should also be - loaded into the dataset dict (besides "bbox", "bbox_mode", "category_id", - "segmentation"). The values for these keys will be returned as-is. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - - Notes: - 1. This function does not read the image files. - The results do not have the "image" field. - """ - from lvis import LVIS - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - if dataset_name is not None: - meta = get_lvis_instances_meta(dataset_name) - MetadataCatalog.get(dataset_name).set(**meta) - - # sort indices for reproducible results - img_ids = sorted(lvis_api.imgs.keys()) - # imgs is a list of dicts, each looks something like: - # {'license': 4, - # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg', - # 'file_name': 'COCO_val2014_000000001268.jpg', - # 'height': 427, - # 'width': 640, - # 'date_captured': '2013-11-17 05:57:24', - # 'id': 1268} - imgs = lvis_api.load_imgs(img_ids) - # anns is a list[list[dict]], where each dict is an annotation - # record for an object. The inner list enumerates the objects in an image - # and the outer list enumerates over images. Example of anns[0]: - # [{'segmentation': [[192.81, - # 247.09, - # ... - # 219.03, - # 249.06]], - # 'area': 1035.749, - # 'image_id': 1268, - # 'bbox': [192.81, 224.8, 74.73, 33.43], - # 'category_id': 16, - # 'id': 42986}, - # ...] 
- anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - # Sanity check that each annotation has a unique id - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - - logger.info("Loaded {} images in the LVIS format from {}".format(len(imgs_anns), json_file)) - - if extra_annotation_keys: - logger.info( - "The following extra annotation keys will be loaded: {} ".format(extra_annotation_keys) - ) - else: - extra_annotation_keys = [] - - def get_file_name(img_root, img_dict): - # Determine the path including the split folder ("train2017", "val2017", "test2017") from - # the coco_url field. Example: - # 'coco_url': 'http://images.cocodataset.org/train2017/000000155379.jpg' - split_folder, file_name = img_dict["coco_url"].split("/")[-2:] - return os.path.join(img_root + split_folder, file_name) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = get_file_name(image_root, img_dict) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get("not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - # Check that the image_id in this annotation is the same as - # the image_id we're looking at. - # This fails only when the data parsing logic or the annotation file is buggy. - assert anno["image_id"] == image_id - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - # LVIS data loader can be used to load COCO dataset categories. In this case `meta` - # variable will have a field with COCO-specific category mapping. - if dataset_name is not None and "thing_dataset_id_to_contiguous_id" in meta: - obj["category_id"] = meta["thing_dataset_id_to_contiguous_id"][anno["category_id"]] - else: - obj["category_id"] = anno["category_id"] - 1 # Convert 1-indexed to 0-indexed - segm = anno["segmentation"] # list[list[float]] - # filter out invalid polygons (< 3 points) - valid_segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - assert len(segm) == len( - valid_segm - ), "Annotation contains an invalid polygon with < 3 points" - assert len(segm) > 0 - obj["segmentation"] = segm - for extra_ann_key in extra_annotation_keys: - obj[extra_ann_key] = anno[extra_ann_key] - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - return dataset_dicts - - -def get_lvis_instances_meta(dataset_name): - """ - Load LVIS metadata. - - Args: - dataset_name (str): LVIS dataset name without the split name (e.g., "lvis_v0.5"). 
- - Returns: - dict: LVIS metadata with keys: thing_classes - """ - if "cocofied" in dataset_name: - return _get_coco_instances_meta() - if "v0.5" in dataset_name: - return _get_lvis_instances_meta_v0_5() - elif "v1" in dataset_name: - return _get_lvis_instances_meta_v1() - raise ValueError("No built-in metadata for dataset {}".format(dataset_name)) - - -def _get_lvis_instances_meta_v0_5(): - assert len(LVIS_V0_5_CATEGORIES) == 1230 - cat_ids = [k["id"] for k in LVIS_V0_5_CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(LVIS_V0_5_CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["synonyms"][0] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - - -def _get_lvis_instances_meta_v1(): - assert len(LVIS_V1_CATEGORIES) == 1203 - cat_ids = [k["id"] for k in LVIS_V1_CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(LVIS_V1_CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["synonyms"][0] for k in lvis_categories] - meta = {"thing_classes": thing_classes, "class_image_count": LVIS_V1_CATEGORY_IMAGE_COUNT} - return meta - - -if __name__ == "__main__": - """ - Test the LVIS json dataset loader. - - Usage: - python -m detectron2.data.datasets.lvis \ - path/to/json path/to/image_root dataset_name vis_limit - """ - import sys - import numpy as np - from detectron2.utils.logger import setup_logger - from PIL import Image - import detectron2.data.datasets # noqa # add pre-defined metadata - from detectron2.utils.visualizer import Visualizer - - logger = setup_logger(name=__name__) - meta = MetadataCatalog.get(sys.argv[3]) - - dicts = load_lvis_json(sys.argv[1], sys.argv[2], sys.argv[3]) - logger.info("Done loading {} samples.".format(len(dicts))) - - dirname = "lvis-data-vis" - os.makedirs(dirname, exist_ok=True) - for d in dicts[: int(sys.argv[4])]: - img = np.array(Image.open(d["file_name"])) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/js__PxKuGXQ7t1kgFAt01KambNhC5Tl2ewy_HHNT5A4YbHo__na_sM4tP0YCd.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/js__PxKuGXQ7t1kgFAt01KambNhC5Tl2ewy_HHNT5A4YbHo__na_sM4tP0YCd.js deleted file mode 100644 index cdb26d46d952e2034ea1195cf4fc781a604ca4b4..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/js__PxKuGXQ7t1kgFAt01KambNhC5Tl2ewy_HHNT5A4YbHo__na_sM4tP0YCd.js +++ /dev/null @@ -1,913 +0,0 @@ -(function($) { - -// Check if this file has already been loaded. -if (typeof Drupal.wysiwygAttach !== 'undefined') { - return; -} - -// Keeps track of editor status during AJAX operations, active format and more. -// Always use getFieldInfo() to get a valid reference to the correct data. -var _fieldInfoStorage = {}; -// Keeps track of information relevant to each format, such as editor settings. -// Always use getFormatInfo() to get a reference to a format's data. 
-var _formatInfoStorage = {}; - -// Keeps track of global and per format plugin configurations. -// Always use getPluginInfo() tog get a valid reference to the correct data. -var _pluginInfoStorage = {'global': {'drupal': {}, 'native': {}}}; - -// Keeps track of private instance information. -var _internalInstances = {}; - -// Keeps track of initialized editor libraries. -var _initializedLibraries = {}; - -// Keeps a map between format selectboxes and fields. -var _selectToField = {}; - -/** - * Returns field specific editor data. - * - * @throws Error - * Exception thrown if data for an unknown field is requested. - * Summary fields are expected to use the same data as the main field. - * - * If a field id contains the delimiter '--', anything after that is dropped and - * the remainder is assumed to be the id of an original field replaced by an - * AJAX operation, due to how Drupal generates unique ids. - * @see drupal_html_id() - * - * Do not modify the returned object unless you really know what you're doing. - * No external code should need access to this, and it may likely change in the - * future. - * - * @param fieldId - * The id of the field to get data for. - * - * @returns - * A reference to an object with the following properties: - * - activeFormat: A string with the active format id. - * - enabled: A boolean, true if the editor is attached. - * - formats: An object with one sub-object for each available format, holding - * format specific state data for this field. - * - summary: An optional string with the id of a corresponding summary field. - * - trigger: A string with the id of the format selector for the field. - * - getFormatInfo: Shortcut method to getFormatInfo(fieldInfo.activeFormat). - */ -function getFieldInfo(fieldId) { - if (_fieldInfoStorage[fieldId]) { - return _fieldInfoStorage[fieldId]; - } - var baseFieldId = (fieldId.indexOf('--') === -1 ? fieldId : fieldId.substr(0, fieldId.indexOf('--'))); - if (_fieldInfoStorage[baseFieldId]) { - return _fieldInfoStorage[baseFieldId]; - } - throw new Error('Wysiwyg module has no information about field "' + fieldId + '"'); -} - -/** - * Returns format specific editor data. - * - * Do not modify the returned object unless you really know what you're doing. - * No external code should need access to this, and it may likely change in the - * future. - * - * @param formatId - * The id of a format to get data for. - * - * @returns - * A reference to an object with the following properties: - * - editor: A string with the id of the editor attached to the format. - * 'none' if no editor profile is associated with the format. - * - enabled: True if the editor is active. - * - toggle: True if the editor can be toggled on/off by the user. - * - editorSettings: A structure holding editor settings for this format. - * - getPluginInfo: Shortcut method to get plugin config for the this format. - */ -function getFormatInfo(formatId) { - if (_formatInfoStorage[formatId]) { - return _formatInfoStorage[formatId]; - } - return { - editor: 'none', - getPluginInfo: function () { - return getPluginInfo(formatId); - } - }; -} - -/** - * Returns plugin configuration for a specific format, or the global values. - * - * @param formatId - * The id of a format to get data for, or 'global' to get data common to all - * formats and editors. Use 'global:editorname' to limit it to one editor. - * - * @return - * The returned object will have the sub-objects 'drupal' and 'native', each - * with properties matching names of plugins. 
- * Global data for Drupal (cross-editor) plugins will have the following keys: - * - title: A human readable name for the button. - * - internalName: The unique name of a native plugin wrapper, used in editor - * profiles and when registering the plugin with the editor API to avoid - * possible id conflicts with native plugins. - * - css: A stylesheet needed by the plugin. - * - icon path: The path where button icons are stored. - * - path: The path to the plugin's main folder. - * - buttons: An object with button data, keyed by name with the properties: - * - description: A human readable string describing the button's function. - * - title: A human readable string with the name of the button. - * - icon: An object with one or more of the following properties: - * - src: An absolute (begins with '/') or relative path to the icon. - * - path: An absolute path to a folder containing the button. - * - * When formatId matched a format with an assigned editor, values for plugins - * match the return value of the editor integration's [proxy] plugin settings - * callbacks. - * - * @see Drupal.wysiwyg.utilities.getPluginInfo() - * @see Drupal.wyswiyg.utilities.extractButtonSettings() - */ -function getPluginInfo(formatId) { - var match, editor; - if ((match = formatId.match(/^global:(\w+)$/))) { - formatId = 'global'; - editor = match[1]; - } - if (!_pluginInfoStorage[formatId]) { - return {}; - } - if (formatId === 'global' && typeof editor !== 'undefined') { - return { 'drupal': _pluginInfoStorage.global.drupal, 'native': (_pluginInfoStorage.global['native'][editor]) }; - } - return _pluginInfoStorage[formatId]; -} - -/** - * Attach editors to input formats and target elements (f.e. textareas). - * - * This behavior searches for input format selectors and formatting guidelines - * that have been preprocessed by Wysiwyg API. All CSS classes of those elements - * with the prefix 'wysiwyg-' are parsed into input format parameters, defining - * the input format, configured editor, target element id, and variable other - * properties, which are passed to the attach/detach hooks of the corresponding - * editor. - * - * Furthermore, an "enable/disable rich-text" toggle link is added after the - * target element to allow users to alter its contents in plain text. - * - * This is executed once, while editor attach/detach hooks can be invoked - * multiple times. - * - * @param context - * A DOM element, supplied by Drupal.attachBehaviors(). - */ -Drupal.behaviors.attachWysiwyg = { - attach: function (context, settings) { - // This breaks in Konqueror. Prevent it from running. - if (/KDE/.test(navigator.vendor)) { - return; - } - var wysiwygs = $('.wysiwyg:input', context); - if (!wysiwygs.length) { - // No new fields, nothing to update. - return; - } - updateInternalState(settings.wysiwyg, context); - wysiwygs.once('wysiwyg', function () { - // Skip processing if the element is unknown or does not exist in this - // document. Can happen after a form was removed but Drupal.ajax keeps a - // lingering reference to the form and calls Drupal.attachBehaviors(). - var $this = $('#' + this.id, document); - if (!$this.length) { - return; - } - // Directly attach this editor, if the input format is enabled or there is - // only one input format at all. - Drupal.wysiwygAttach(context, this.id); - }) - .closest('form').submit(function (event) { - // Detach any editor when the containing form is submitted. - // Do not detach if the event was cancelled. 
- if (event.isDefaultPrevented()) { - return; - } - var form = this; - $('.wysiwyg:input', this).each(function () { - Drupal.wysiwygDetach(form, this.id, 'serialize'); - }); - }); - }, - - detach: function (context, settings, trigger) { - var wysiwygs; - // The 'serialize' trigger indicates that we should simply update the - // underlying element with the new text, without destroying the editor. - if (trigger == 'serialize') { - // Removing the wysiwyg-processed class guarantees that the editor will - // be reattached. Only do this if we're planning to destroy the editor. - wysiwygs = $('.wysiwyg-processed:input', context); - } - else { - wysiwygs = $('.wysiwyg:input', context).removeOnce('wysiwyg'); - } - wysiwygs.each(function () { - Drupal.wysiwygDetach(context, this.id, trigger); - if (trigger === 'unload') { - // Delete the instance in case the field is removed. This is safe since - // detaching with the unload trigger is reverts to the 'none' "editor". - delete _internalInstances[this.id]; - delete Drupal.wysiwyg.instances[this.id]; - } - }); - } -}; - -/** - * Attach an editor to a target element. - * - * Detaches any existing instance for the field before attaching a new instance - * based on the current state of the field. Editor settings and state - * information is fetched based on the element id and get cloned first, so they - * cannot be overridden. After attaching the editor, the toggle link is shown - * again, except in case we are attaching no editor. - * - * Also attaches editors to the summary field, if available. - * - * @param context - * A DOM element, supplied by Drupal.attachBehaviors(). - * @param fieldId - * The id of an element to attach an editor to. - */ -Drupal.wysiwygAttach = function(context, fieldId) { - var fieldInfo = getFieldInfo(fieldId), - doSummary = (fieldInfo.summary && (!fieldInfo.formats[fieldInfo.activeFormat] || !fieldInfo.formats[fieldInfo.activeFormat].skip_summary)); - // Detach any previous editor instance if enabled, else remove the grippie. - detachFromField(fieldId, context, 'unload'); - var wasSummary = !!_internalInstances[fieldInfo.summary]; - if (doSummary || wasSummary) { - detachFromField(fieldId, context, 'unload', {summary: true}); - } - // Store this field id, so (external) plugins can use it. - // @todo Wrong point in time. Probably can only supported by editors which - // support an onFocus() or similar event. - Drupal.wysiwyg.activeId = fieldId; - // Attach or update toggle link, if enabled. - Drupal.wysiwygAttachToggleLink(context, fieldId); - // Attach to main field. - attachToField(fieldId, context); - // Attach to summary field. - if (doSummary || wasSummary) { - // If the summary wrapper is visible, attach immediately. - if ($('#' + fieldInfo.summary).parents('.text-summary-wrapper').is(':visible')) { - attachToField(fieldId, context, {summary: true, forceDisabled: !doSummary}); - } - else { - // Attach an instance of the 'none' editor to have consistency while the - // summary is hidden, then switch to a real editor instance when shown. - attachToField(fieldId, context, {summary: true, forceDisabled: true}); - // Unbind any existing click handler to avoid double toggling. 
- $('#' + fieldId).parents('.text-format-wrapper').find('.link-edit-summary').closest('.field-edit-link').unbind('click.wysiwyg').bind('click.wysiwyg', function () { - detachFromField(fieldId, context, 'unload', {summary: true}); - attachToField(fieldId, context, {summary: true, forceDisabled: !doSummary}); - $(this).unbind('click.wysiwyg'); - }); - } - } -}; - -/** - * The public API exposed for an editor-enabled field. - * - * Properties should be treated as read-only state and changing them will not - * have any effect on how the instance behaves. - * - * Note: The attach() and detach() methods are not part of the public API and - * should not be called directly to avoid synchronization issues. - * Use Drupal.wysiwygAttach() and Drupal.wysiwygDetach() to activate or - * deactivate editor instances. Externally switching the active editor is not - * supported other than changing the format using the select element. - */ -function WysiwygInstance(internalInstance) { - // The id of the field the instance manipulates. - this.field = internalInstance.field; - // The internal name of the attached editor. - this.editor = internalInstance.editor; - // If the editor is currently enabled or not. - this['status'] = internalInstance['status']; - // The id of the text format the editor is attached to. - this.format = internalInstance.format; - // If the field is resizable without an editor attached. - this.resizable = internalInstance.resizable; - - // Methods below here redirect to the 'none' editor which handles plain text - // fields when the editor is disabled. - - /** - * Insert content at the cursor position. - * - * @param content - * An HTML markup string. - */ - this.insert = function (content) { - return internalInstance['status'] ? internalInstance.insert(content) : Drupal.wysiwyg.editor.instance.none.insert.call(internalInstance, content); - } - - /** - * Get all content from the editor. - * - * @return - * An HTML markup string. - */ - this.getContent = function () { - return internalInstance['status'] ? internalInstance.getContent() : Drupal.wysiwyg.editor.instance.none.getContent.call(internalInstance); - } - - /** - * Replace all content in the editor. - * - * @param content - * An HTML markup string. - */ - this.setContent = function (content) { - return internalInstance['status'] ? internalInstance.setContent(content) : Drupal.wysiwyg.editor.instance.none.setContent.call(internalInstance, content); - } - - /** - * Check if the editor is in fullscreen mode. - * - * @return bool - * True if the editor is considered to be in fullscreen mode. - */ - this.isFullscreen = function (content) { - return internalInstance['status'] && $.isFunction(internalInstance.isFullscreen) ? internalInstance.isFullscreen() : false; - } - - // @todo The methods below only work for TinyMCE, deprecate? - - /** - * Open a native editor dialog. - * - * Use of this method i not recommended due to limited editor support. - * - * @param dialog - * An object with dialog settings. Keys used: - * - url: The url of the dialog template. - * - width: Width in pixels. - * - height: Height in pixels. - */ - this.openDialog = function (dialog, params) { - if ($.isFunction(internalInstance.openDialog)) { - return internalInstance.openDialog(dialog, params) - } - } - - /** - * Close an opened dialog. - * - * @param dialog - * Same options as for opening a dialog. 
- */ - this.closeDialog = function (dialog) { - if ($.isFunction(internalInstance.closeDialog)) { - return internalInstance.closeDialog(dialog) - } - } -} - -/** - * The private base for editor instances. - * - * An instance of this object is used as the context for all calls into the - * editor instances (including attach() and detach() when only one instance is - * asked to detach). - * - * Anything added to Drupal.wysiwyg.editor.instance[editorName] is cloned into - * an instance of this function. - * - * Editor state parameters are cloned into the instance after that. - */ -function WysiwygInternalInstance(params) { - $.extend(true, this, Drupal.wysiwyg.editor.instance[params.editor]); - $.extend(true, this, params); - this.pluginInfo = { - 'global': getPluginInfo('global:' + params.editor), - 'instances': getPluginInfo(params.format) - }; - // Keep track of the public face to keep it synced. - this.publicInstance = new WysiwygInstance(this); -} - -/** - * Updates internal settings and state caches with new information. - * - * Attaches selection change handler to format selector to track state changes. - * - * @param settings - * A structure like Drupal.settigns.wysiwyg. - * @param context - * The context given from Drupal.attachBehaviors(). - */ -function updateInternalState(settings, context) { - var pluginData = settings.plugins; - for (var plugin in pluginData.drupal) { - if (!(plugin in _pluginInfoStorage.global.drupal)) { - _pluginInfoStorage.global.drupal[plugin] = pluginData.drupal[plugin]; - } - } - // To make sure we don't rely on Drupal.settings, uncomment these for testing. - //pluginData.drupal = {}; - for (var editorId in pluginData['native']) { - for (var plugin in pluginData['native'][editorId]) { - _pluginInfoStorage.global['native'][editorId] = (_pluginInfoStorage.global['native'][editorId] || {}); - if (!(plugin in _pluginInfoStorage.global['native'][editorId])) { - _pluginInfoStorage.global['native'][editorId][plugin] = pluginData['native'][editorId][plugin]; - } - } - } - //pluginData['native'] = {}; - for (var fmatId in pluginData) { - if (fmatId.substr(0, 6) !== 'format') { - continue; - } - _pluginInfoStorage[fmatId] = (_pluginInfoStorage[fmatId] || {'drupal': {}, 'native': {}}); - for (var plugin in pluginData[fmatId].drupal) { - if (!(plugin in _pluginInfoStorage[fmatId].drupal)) { - _pluginInfoStorage[fmatId].drupal[plugin] = pluginData[fmatId].drupal[plugin]; - } - } - for (var plugin in pluginData[fmatId]['native']) { - if (!(plugin in _pluginInfoStorage[fmatId]['native'])) { - _pluginInfoStorage[fmatId]['native'][plugin] = pluginData[fmatId]['native'][plugin]; - } - } - delete pluginData[fmatId]; - } - // Build the cache of format/profile settings. - for (var editor in settings.configs) { - if (!settings.configs.hasOwnProperty(editor)) { - continue; - } - for (var format in settings.configs[editor]) { - if (_formatInfoStorage[format] || !settings.configs[editor].hasOwnProperty(format)) { - continue; - } - _formatInfoStorage[format] = { - editor: editor, - toggle: true, // Overridden by triggers. - editorSettings: processObjectTypes(settings.configs[editor][format]) - }; - } - // Initialize editor libraries if not already done. - if (!_initializedLibraries[editor] && typeof Drupal.wysiwyg.editor.init[editor] === 'function') { - // Clone, so original settings are not overwritten. 
- Drupal.wysiwyg.editor.init[editor](jQuery.extend(true, {}, settings.configs[editor]), getPluginInfo('global:' + editor)); - _initializedLibraries[editor] = true; - } - // Update libraries, in case new plugins etc have not been initialized yet. - else if (typeof Drupal.wysiwyg.editor.update[editor] === 'function') { - Drupal.wysiwyg.editor.update[editor](jQuery.extend(true, {}, settings.configs[editor]), getPluginInfo('global:' + editor)); - } - } - //settings.configs = {}; - for (var triggerId in settings.triggers) { - var trigger = settings.triggers[triggerId]; - var fieldId = trigger.field; - var baseFieldId = (fieldId.indexOf('--') === -1 ? fieldId : fieldId.substr(0, fieldId.indexOf('--'))); - var fieldInfo = null; - if ($('#' + triggerId, context).length === 0) { - // Skip fields which may have been removed or are not in this context. - continue; - } - if (!(fieldInfo = _fieldInfoStorage[baseFieldId])) { - fieldInfo = _fieldInfoStorage[baseFieldId] = { - formats: {}, - select: trigger.select, - resizable: trigger.resizable, - summary: trigger.summary, - getFormatInfo: function () { - if (this.select) { - this.activeFormat = 'format' + $('#' + this.select + ':input').val(); - } - return getFormatInfo(this.activeFormat); - } - // 'activeFormat' and 'enabled' added below. - }; - } - for (var format in trigger) { - if (format.indexOf('format') != 0 || fieldInfo.formats[format]) { - continue; - } - fieldInfo.formats[format] = { - 'enabled': trigger[format].status - }; - if (!_formatInfoStorage[format]) { - _formatInfoStorage[format] = { - editor: trigger[format].editor, - editorSettings: {}, - getPluginInfo: function () { - return getPluginInfo(formatId); - } - }; - } - // Always update these since they are stored as state. - _formatInfoStorage[format].toggle = trigger[format].toggle; - if (trigger[format].skip_summary) { - fieldInfo.formats[format].skip_summary = true; - } - } - var $selectbox = null; - // Always update these since Drupal generates new ids on AJAX calls. - fieldInfo.summary = trigger.summary; - if (trigger.select) { - _selectToField[trigger.select.replace(/--\d+$/,'')] = trigger.field; - fieldInfo.select = trigger.select; - // Specifically target input elements in case selectbox wrappers have - // hidden the real element and cloned its attributes. - $selectbox = $('#' + trigger.select + ':input', context).filter('select'); - // Attach onChange handlers to input format selector elements. - $selectbox.unbind('change.wysiwyg').bind('change.wysiwyg', formatChanged); - } - // Always update the active format to ensure the righ profile is used if a - // field was removed and gets re-added and the instance was left behind. - fieldInfo.activeFormat = 'format' + ($selectbox ? $selectbox.val() : trigger.activeFormat); - fieldInfo.enabled = fieldInfo.formats[fieldInfo.activeFormat] && fieldInfo.formats[fieldInfo.activeFormat].enabled; - } - //settings.triggers = {}; -} - -/** - * Helper to prepare and attach an editor for a single field. - * - * Creates the 'instance' object under Drupal.wysiwyg.instances[fieldId]. - * - * @param mainFieldId - * The id of the field's main element, for fetching field info. - * @param context - * A DOM element, supplied by Drupal.attachBehaviors(). - * @param params - * An optional object for overriding state information for the editor with the - * following properties: - * - 'summary': Set to true to indicate to attach to the summary instead of - * the main element. Defaults to false. 
- * - 'forceDisabled': Set to true to override the current state of the field - * and assume it is disabled. Useful for hidden summary instances. - * - * @see Drupal.wysiwygAttach() - */ -function attachToField(mainFieldId, context, params) { - params = params || {}; - var fieldInfo = getFieldInfo(mainFieldId); - var fieldId = (params.summary ? fieldInfo.summary : mainFieldId); - var formatInfo = fieldInfo.getFormatInfo(); - // If the editor isn't active, attach default behaviors instead. - var enabled = (fieldInfo.enabled && !params.forceDisabled); - var editor = (enabled ? formatInfo.editor : 'none'); - // Settings are deep merged (cloned) to prevent editor implementations from - // permanently modifying them while attaching. - var clonedSettings = (enabled ? jQuery.extend(true, {}, formatInfo.editorSettings) : {}); - // (Re-)initialize field instance. - var stateParams = { - field: fieldId, - editor: formatInfo.editor, - 'status': enabled, - format: fieldInfo.activeFormat, - resizable: fieldInfo.resizable - }; - var internalInstance = new WysiwygInternalInstance(stateParams); - _internalInstances[fieldId] = internalInstance; - Drupal.wysiwyg.instances[fieldId] = internalInstance.publicInstance; - // Attach editor, if enabled by default or last state was enabled. - Drupal.wysiwyg.editor.attach[editor].call(internalInstance, context, stateParams, clonedSettings); -} - -/** - * Detach all editors from a target element. - * - * Ensures Drupal's original textfield resize functionality is restored if - * enabled and the triggering reason is 'unload'. - * - * Also detaches editors from the summary field, if available. - * - * @param context - * A DOM element, supplied by Drupal.detachBehaviors(). - * @param fieldId - * The id of an element to attach an editor to. - * @param trigger - * A string describing what is causing the editor to be detached. - * - 'serialize': The editor normally just syncs its contents to the original - * textarea for value serialization before an AJAX request. - * - 'unload': The editor is to be removed completely and the original - * textarea restored. - * - * @see Drupal.detachBehaviors() - */ -Drupal.wysiwygDetach = function (context, fieldId, trigger) { - var fieldInfo = getFieldInfo(fieldId), - trigger = trigger || 'unload'; - // Detach from main field. - detachFromField(fieldId, context, trigger); - if (trigger == 'unload') { - // Attach the resize behavior by forcing status to false. Other values are - // intentionally kept the same to show which editor is normally attached. - attachToField(fieldId, context, {forceDisabled: true}); - Drupal.wysiwygAttachToggleLink(context, fieldId); - } - // Detach from summary field. - if (fieldInfo.summary && _internalInstances[fieldInfo.summary]) { - // The "Edit summary" click handler could re-enable the editor by mistake. - $('#' + fieldId).parents('.text-format-wrapper').find('.link-edit-summary').unbind('click.wysiwyg'); - detachFromField(fieldId, context, trigger, {summary: true}); - if (trigger == 'unload') { - attachToField(fieldId, context, {summary: true}); - } - } -}; - -/** - * Helper to detach and clean up after an editor for a single field. - * - * Removes the 'instance' object under Drupal.wysiwyg.instances[fieldId]. - * - * @param mainFieldId - * The id of the field's main element, for fetching field info. - * @param context - * A DOM element, supplied by Drupal.detachBehaviors(). - * @param trigger - * A string describing what is causing the editor to be detached. 
- * - 'serialize': The editor normally just syncs its contents to the original - * textarea for value serialization before an AJAX request. - * - 'unload': The editor is to be removed completely and the original - * textarea restored. - * @param params - * An optional object for overriding state information for the editor with the - * following properties: - * - 'summary': Set to true to indicate to detach from the summary instead of - * the main element. Defaults to false. - * - * @see Drupal.wysiwygDetach() - */ -function detachFromField(mainFieldId, context, trigger, params) { - params = params || {}; - var fieldInfo = getFieldInfo(mainFieldId); - var fieldId = (params.summary ? fieldInfo.summary : mainFieldId); - var enabled = false; - var editor = 'none'; - if (_internalInstances[fieldId]) { - enabled = _internalInstances[fieldId]['status']; - editor = (enabled ? _internalInstances[fieldId].editor : 'none'); - } - var stateParams = { - field: fieldId, - 'status': enabled, - editor: fieldInfo.editor, - format: fieldInfo.activeFormat, - resizable: fieldInfo.resizable - }; - if (jQuery.isFunction(Drupal.wysiwyg.editor.detach[editor])) { - Drupal.wysiwyg.editor.detach[editor].call(_internalInstances[fieldId], context, stateParams, trigger); - } - if (trigger == 'unload') { - delete Drupal.wysiwyg.instances[fieldId]; - delete _internalInstances[fieldId]; - } -} - -/** - * Append or update an editor toggle link to a target element. - * - * @param context - * A DOM element, supplied by Drupal.attachBehaviors(). - * @param fieldId - * The id of an element to attach an editor to. - */ -Drupal.wysiwygAttachToggleLink = function(context, fieldId) { - var fieldInfo = getFieldInfo(fieldId), - editor = fieldInfo.getFormatInfo().editor; - if (!fieldInfo.getFormatInfo().toggle) { - // Otherwise, ensure that toggle link is hidden. - $('#wysiwyg-toggle-' + fieldId).hide(); - return; - } - if (!$('#wysiwyg-toggle-' + fieldId, context).length) { - var text = document.createTextNode(fieldInfo.enabled ? Drupal.settings.wysiwyg.disable : Drupal.settings.wysiwyg.enable), - a = document.createElement('a'), - div = document.createElement('div'); - $(a).attr({ id: 'wysiwyg-toggle-' + fieldId, href: 'javascript:void(0);' }).append(text); - $(div).addClass('wysiwyg-toggle-wrapper').append(a); - if ($('#' + fieldInfo.select).closest('.fieldset-wrapper').prepend(div).length == 0) { - // Fall back to inserting the link right after the field. - $('#' + fieldId).after(div); - }; - } - $('#wysiwyg-toggle-' + fieldId, context) - .html(fieldInfo.enabled ? Drupal.settings.wysiwyg.disable : Drupal.settings.wysiwyg.enable).show() - .unbind('click.wysiwyg') - .bind('click.wysiwyg', { 'fieldId': fieldId, 'context': context }, Drupal.wysiwyg.toggleWysiwyg); - - // Hide toggle link in case no editor is attached. - if (editor == 'none') { - $('#wysiwyg-toggle-' + fieldId).hide(); - } -}; - -/** - * Callback for the Enable/Disable rich editor link. - */ -Drupal.wysiwyg.toggleWysiwyg = function (event) { - var context = event.data.context, - fieldId = event.data.fieldId, - fieldInfo = getFieldInfo(fieldId); - // Toggling the enabled state indirectly toggles use of the 'none' editor. - if (fieldInfo.enabled) { - fieldInfo.enabled = false; - Drupal.wysiwygDetach(context, fieldId, 'unload'); - } - else { - fieldInfo.enabled = true; - Drupal.wysiwygAttach(context, fieldId); - } - fieldInfo.formats[fieldInfo.activeFormat].enabled = fieldInfo.enabled; -} - - -/** - * Event handler for when the selected format is changed. 
- */ -function formatChanged(event) { - var fieldId = _selectToField[this.id.replace(/--\d+$/, '')]; - var context = $(this).closest('form'); - var newFormat = 'format' + $(this).val(); - // Field state is fetched by reference. - var currentField = getFieldInfo(fieldId); - // Prevent double-attaching if change event is triggered manually. - if (newFormat === currentField.activeFormat) { - return; - } - // Save the state of the current format. - if (currentField.formats[currentField.activeFormat]) { - currentField.formats[currentField.activeFormat].enabled = currentField.enabled; - } - // Switch format/profile. - currentField.activeFormat = newFormat; - // Load the state from the new format. - if (currentField.formats[currentField.activeFormat]) { - currentField.enabled = currentField.formats[currentField.activeFormat].enabled; - } - else { - currentField.enabled = false; - } - // Attaching again will use the changed field state. - Drupal.wysiwygAttach(context, fieldId); -} - -/** - * Convert JSON type placeholders into the actual types. - * - * Recognizes function references (callbacks) and Regular Expressions. - * - * To create a callback, pass in an object with the following properties: - * - 'drupalWysiwygType': Must be set to 'callback'. - * - 'name': A string with the name of the callback, use - * 'object.subobject.method' syntax for methods in nested objects. - * - 'context': An optional string with the name of an object for overriding - * 'this' inside the function. Use 'object.subobject' syntax for nested - * objects. Defaults to the window object. - * - * To create a RegExp, pass in an object with the following properties: - * - 'drupalWysiwygType: Must be set to 'regexp'. - * - 'regexp': The Regular Expression as a string, without / wrappers. - * - 'modifiers': An optional string with modifiers to set on the RegExp object. - * - * @param json - * The json argument with all recognized type placeholders replaced by the real - * types. - * - * @return The JSON object with placeholder types replaced. - */ -function processObjectTypes(json) { - var out = null; - if (typeof json != 'object') { - return json; - } - out = new json.constructor(); - if (json.drupalWysiwygType) { - switch (json.drupalWysiwygType) { - case 'callback': - out = callbackWrapper(json.name, json.context); - break; - case 'regexp': - out = new RegExp(json.regexp, json.modifiers ? json.modifiers : undefined); - break; - default: - out.drupalWysiwygType = json.drupalWysiwygType; - } - } - else { - for (var i in json) { - if (json.hasOwnProperty(i) && json[i] && typeof json[i] == 'object') { - out[i] = processObjectTypes(json[i]); - } - else { - out[i] = json[i]; - } - } - } - return out; -} - -/** - * Convert function names into function references. - * - * @param name - * The name of a function to use as callback. Use the 'object.subobject.method' - * syntax for methods in nested objects. - * @param context - * An optional string with the name of an object for overriding 'this' inside - * the function. Use 'object.subobject' syntax for nested objects. Defaults to - * the window object. - * - * @return - * A function which will call the named function or method in the proper - * context, passing through arguments and return values. 
- */ -function callbackWrapper(name, context) { - var namespaces = name.split('.'), func = namespaces.pop(), obj = window; - for (var i = 0; obj && i < namespaces.length; i++) { - obj = obj[namespaces[i]]; - } - if (!obj) { - throw "Wysiwyg: Unable to locate callback " + namespaces.join('.') + "." + func + "()"; - } - if (!context) { - context = obj; - } - else if (typeof context == 'string'){ - namespaces = context.split('.'); - context = window; - for (i = 0; context && i < namespaces.length; i++) { - context = context[namespaces[i]]; - } - if (!context) { - throw "Wysiwyg: Unable to locate context object " + namespaces.join('.'); - } - } - if (typeof obj[func] != 'function') { - throw "Wysiwyg: " + func + " is not a callback function"; - } - return function () { - return obj[func].apply(context, arguments); - } -} - -var oldBeforeSerialize = (Drupal.ajax ? Drupal.ajax.prototype.beforeSerialize : false); -if (oldBeforeSerialize) { - /** - * Filter the ajax_html_ids list sent in AJAX requests. - * - * This overrides part of the form serializer to not include ids we know will - * not collide because editors are removed before those ids are reused. - * - * This avoids hitting like max_input_vars, which defaults to 1000, - * even with just a few active editor instances. - */ - Drupal.ajax.prototype.beforeSerialize = function (element, options) { - var ret = oldBeforeSerialize.call(this, element, options); - var excludeSelectors = []; - $.each(Drupal.wysiwyg.excludeIdSelectors, function () { - if ($.isArray(this)) { - excludeSelectors = excludeSelectors.concat(this); - } - }); - if (excludeSelectors.length > 0) { - var ajaxHtmlIdsArray = options.data['ajax_html_ids[]']; - if (!ajaxHtmlIdsArray || ajaxHtmlIdsArray.length === 0) { - return ret; - } - options.data['ajax_html_ids[]'] = []; - $('[id]:not(' + excludeSelectors.join(',') + ')').each(function () { - if ($.inArray(this.id, ajaxHtmlIdsArray) !== -1) { - options.data['ajax_html_ids[]'].push(this.id); - } - }); - } - return ret; - }; -} - -// Respond to CTools detach behaviors event. -$(document).unbind('CToolsDetachBehaviors.wysiwyg').bind('CToolsDetachBehaviors.wysiwyg', function(event, context) { - $('.wysiwyg:input', context).removeOnce('wysiwyg').each(function () { - Drupal.wysiwygDetach(context, this.id, 'unload'); - // The 'none' instances are destroyed with the dialog. - delete Drupal.wysiwyg.instances[this.id]; - delete _internalInstances[this.id]; - var baseFieldId = (this.id.indexOf('--') === -1 ? 
this.id : this.id.substr(0, this.id.indexOf('--'))); - delete _fieldInfoStorage[baseFieldId]; - }); -}); - -})(jQuery); - -;/*})'"*/ -;/*})'"*/ diff --git a/spaces/nt3awnou/Nt3awnou-rescue-map/README.md b/spaces/nt3awnou/Nt3awnou-rescue-map/README.md deleted file mode 100644 index f7e186f278462063591cf753390131b410452937..0000000000000000000000000000000000000000 --- a/spaces/nt3awnou/Nt3awnou-rescue-map/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nt3awnu Map -emoji: ❤️ -colorFrom: white -colorTo: white -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nyaasaT/Nyanator/README.md b/spaces/nyaasaT/Nyanator/README.md deleted file mode 100644 index 22f140e4c628f2b9dd275b215075a5c1a34fb290..0000000000000000000000000000000000000000 --- a/spaces/nyaasaT/Nyanator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nyanator -emoji: 💻 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nyx-ai/stylegan2-flax-tpu/fid/__init__.py b/spaces/nyx-ai/stylegan2-flax-tpu/fid/__init__.py deleted file mode 100644 index 3171bb2608c7f58d7ceb7bbe75e5bb1d6d173794..0000000000000000000000000000000000000000 --- a/spaces/nyx-ai/stylegan2-flax-tpu/fid/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .core import FID diff --git a/spaces/ofikodar/chatgpt-resume-builder/README.md b/spaces/ofikodar/chatgpt-resume-builder/README.md deleted file mode 100644 index e6801597d97d7cd78877e0349e6c94375f12f131..0000000000000000000000000000000000000000 --- a/spaces/ofikodar/chatgpt-resume-builder/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: ChatCV Resume Builder -emoji: 📊 -colorFrom: indigo -colorTo: indigo -sdk: docker -pinned: false ---- - - - - ChatCV - AI Resume Builder - - -

    ChatCV - AI Resume Builder

    -

    ChatCV is a Streamlit web app that uses the OpenAI ChatGPT model to improve your resume. It takes your existing resume in PDF format, processes it, and sends it to a chatbot that acts as a recruiter. The chatbot then suggests improvements to your resume, which you can edit and save as a new PDF file.
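    To make that flow concrete, the recruiter-style review step could look roughly like the sketch below. This is an illustrative example only, not the app's actual code: the model name, prompt wording, and function name are assumptions.

```python
# Hypothetical sketch of the recruiter-style review step (not the real app.py).
# Assumes openai.api_key has already been set (see the API key note below).
import openai

def suggest_improvements(resume_text: str) -> str:
    """Ask the ChatGPT model to act as a recruiter and critique a resume."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name for "the ChatGPT model"
        messages=[
            {"role": "system", "content": "You are a recruiter reviewing resumes."},
            {"role": "user", "content": resume_text},  # text extracted from the PDF
        ],
    )
    return response.choices[0].message["content"]
```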

    -

    Please note that processing may take a few minutes, and the chatbot can occasionally return inaccuracies. It is recommended to double-check the information before finalizing your improved resume.

    -

    To use ChatCV, you will need an OpenAI API key, which can be obtained from https://platform.openai.com/account/api-keys. Simply enter your API key into the app and you are ready to start using ChatCV.
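    For illustration, a Streamlit app can collect such a key at runtime with a password-style input, so it is never hard-coded or stored in the repository. The sketch below is an assumption about how this might be wired up, not the actual app.py:

```python
# Hypothetical sketch: let the user supply their own OpenAI API key at runtime.
import openai
import streamlit as st

api_key = st.sidebar.text_input("OpenAI API Key", type="password")
if api_key:
    openai.api_key = api_key  # used by subsequent openai.* calls in the app
```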

    -

    For any issues, please report them in the community section or send an email to ofikodar@gmail.com.

    -

    Start using ChatCV and improve your chances of landing a job!

    - - \ No newline at end of file diff --git a/spaces/oguzakif/video-object-remover/SiamMask/models/siamrpn.py b/spaces/oguzakif/video-object-remover/SiamMask/models/siamrpn.py deleted file mode 100644 index ec9a50b4e4de0f0d8463988b55876b6034ba255e..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/models/siamrpn.py +++ /dev/null @@ -1,144 +0,0 @@ -# -------------------------------------------------------- -# SiamMask -# Licensed under The MIT License -# Written by Qiang Wang (wangqiang2015 at ia.ac.cn) -# -------------------------------------------------------- -import torch -import torch.nn as nn -import torch.nn.functional as F -from SiamMask.utils.bbox_helper import center2corner -from torch.autograd import Variable -from SiamMask.utils.anchors import Anchors - - -class SiamRPN(nn.Module): - def __init__(self, anchors=None): - super(SiamRPN, self).__init__() - self.anchors = anchors # anchor_cfg - self.anchor = Anchors(anchors) - self.anchor_num = self.anchor.anchor_num - self.features = None - self.rpn_model = None - - self.all_anchors = None - - def set_all_anchors(self, image_center, size): - # cx,cy,w,h - if not self.anchor.generate_all_anchors(image_center, size): - return - all_anchors = self.anchor.all_anchors[1] # cx, cy, w, h - self.all_anchors = torch.from_numpy(all_anchors).float().cuda() - self.all_anchors = [self.all_anchors[i] for i in range(4)] - - def feature_extractor(self, x): - return self.features(x) - - def rpn(self, template, search): - pred_cls, pred_loc = self.rpn_model(template, search) - return pred_cls, pred_loc - - def _add_rpn_loss(self, label_cls, label_loc, lable_loc_weight, rpn_pred_cls, - rpn_pred_loc): - ''' - :param compute_anchor_targets_fn: functions to produce anchors' learning targets. - :param rpn_pred_cls: [B, num_anchors * 2, h, w], output of rpn for classification. - :param rpn_pred_loc: [B, num_anchors * 4, h, w], output of rpn for localization. - :return: loss of classification and localization, respectively. - ''' - rpn_loss_cls = select_cross_entropy_loss(rpn_pred_cls, label_cls) - - rpn_loss_loc = weight_l1_loss(rpn_pred_loc, label_loc, lable_loc_weight) - - # classification accuracy, top1 - acc = torch.zeros(1) # TODO - return rpn_loss_cls, rpn_loss_loc, acc - - def run(self, template, search, softmax=False): - """ - run network - """ - template_feature = self.feature_extractor(template) - search_feature = self.feature_extractor(search) - rpn_pred_cls, rpn_pred_loc = self.rpn(template_feature, search_feature) - if softmax: - rpn_pred_cls = self.softmax(rpn_pred_cls) - return rpn_pred_cls, rpn_pred_loc, template_feature, search_feature - - def softmax(self, cls): - b, a2, h, w = cls.size() - cls = cls.view(b, 2, a2//2, h, w) - cls = cls.permute(0, 2, 3, 4, 1).contiguous() - cls = F.log_softmax(cls, dim=4) - return cls - - def forward(self, input): - """ - :param input: dict of input with keys of: - 'template': [b, 3, h1, w1], input template image. - 'search': [b, 3, h2, w2], input search image. - 'label_cls':[b, max_num_gts, 5] or None(self.training==False), - each gt contains x1,y1,x2,y2,class. 
- :return: dict of loss, predict, accuracy - """ - template = input['template'] - search = input['search'] - if self.training: - label_cls = input['label_cls'] - label_loc = input['label_loc'] - lable_loc_weight = input['label_loc_weight'] - - rpn_pred_cls, rpn_pred_loc, template_feature, search_feature = self.run(template, search, softmax=self.training) - - outputs = dict(predict=[], losses=[], accuracy=[]) - - outputs['predict'] = [rpn_pred_loc, rpn_pred_cls, template_feature, search_feature] - if self.training: - rpn_loss_cls, rpn_loss_loc, rpn_acc = self._add_rpn_loss(label_cls, label_loc, lable_loc_weight, - rpn_pred_cls, rpn_pred_loc) - outputs['losses'] = [rpn_loss_cls, rpn_loss_loc] - return outputs - - def template(self, z): - self.zf = self.feature_extractor(z) - cls_kernel, loc_kernel = self.rpn_model.template(self.zf) - return cls_kernel, loc_kernel - - def track(self, x, cls_kernel=None, loc_kernel=None, softmax=False): - xf = self.feature_extractor(x) - rpn_pred_cls, rpn_pred_loc = self.rpn_model.track(xf, cls_kernel, loc_kernel) - if softmax: - rpn_pred_cls = self.softmax(rpn_pred_cls) - return rpn_pred_cls, rpn_pred_loc - - -def get_cls_loss(pred, label, select): - if len(select.size()) == 0: return 0 - pred = torch.index_select(pred, 0, select) - label = torch.index_select(label, 0, select) - return F.nll_loss(pred, label) - - -def select_cross_entropy_loss(pred, label): - pred = pred.view(-1, 2) - label = label.view(-1) - pos = Variable(label.data.eq(1).nonzero().squeeze()).cuda() - neg = Variable(label.data.eq(0).nonzero().squeeze()).cuda() - - loss_pos = get_cls_loss(pred, label, pos) - loss_neg = get_cls_loss(pred, label, neg) - return loss_pos * 0.5 + loss_neg * 0.5 - - -def weight_l1_loss(pred_loc, label_loc, loss_weight): - """ - :param pred_loc: [b, 4k, h, w] - :param label_loc: [b, 4k, h, w] - :param loss_weight: [b, k, h, w] - :return: loc loss value - """ - b, _, sh, sw = pred_loc.size() - pred_loc = pred_loc.view(b, 4, -1, sh, sw) - diff = (pred_loc - label_loc).abs() - diff = diff.sum(dim=1).view(b, -1, sh, sw) - loss = diff * loss_weight - return loss.sum().div(b) diff --git a/spaces/openlamm/LAMM/model/PROCESS/multimodal_preprocessors.py b/spaces/openlamm/LAMM/model/PROCESS/multimodal_preprocessors.py deleted file mode 100644 index 44de961053601fd288c5c92c56b799d5762b8b4c..0000000000000000000000000000000000000000 --- a/spaces/openlamm/LAMM/model/PROCESS/multimodal_preprocessors.py +++ /dev/null @@ -1,687 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import gzip -import html -import io -import math -from functools import lru_cache -from typing import Callable, List, Optional - -import ftfy - -import numpy as np -import regex as re -import torch -import torch.nn as nn -from iopath.common.file_io import g_pathmgr -from timm.models.layers import trunc_normal_ - -from .helpers import cast_if_src_dtype, VerboseNNModule - - -def get_sinusoid_encoding_table(n_position, d_hid): - """Sinusoid position encoding table""" - - # TODO: make it with torch instead of numpy - def get_position_angle_vec(position): - return [ - position / np.power(10000, 2 * (hid_j // 2) / d_hid) - for hid_j in range(d_hid) - ] - - sinusoid_table = np.array( - [get_position_angle_vec(pos_i) for pos_i in range(n_position)] - ) - sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i - sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1 - - return torch.FloatTensor(sinusoid_table).unsqueeze(0) - - -def interpolate_pos_encoding_2d(target_spatial_size, pos_embed): - N = pos_embed.shape[1] - if N == target_spatial_size: - return pos_embed - dim = pos_embed.shape[-1] - # nn.functional.interpolate doesn't work with bfloat16 so we cast to float32 - pos_embed, updated = cast_if_src_dtype(pos_embed, torch.bfloat16, torch.float32) - pos_embed = nn.functional.interpolate( - pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute( - 0, 3, 1, 2 - ), - scale_factor=math.sqrt(target_spatial_size / N), - mode="bicubic", - ) - if updated: - pos_embed, _ = cast_if_src_dtype(pos_embed, torch.float32, torch.bfloat16) - pos_embed = pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) - return pos_embed - - -def interpolate_pos_encoding( - npatch_per_img, - pos_embed, - patches_layout, - input_shape=None, - first_patch_idx=1, -): - assert first_patch_idx == 0 or first_patch_idx == 1, "there is 1 CLS token or none" - N = pos_embed.shape[1] - first_patch_idx # since it's 1 if cls_token exists - if npatch_per_img == N: - return pos_embed - - assert ( - patches_layout[-1] == patches_layout[-2] - ), "Interpolation of pos embed not supported for non-square layouts" - - class_emb = pos_embed[:, :first_patch_idx] - pos_embed = pos_embed[:, first_patch_idx:] - - if input_shape is None or patches_layout[0] == 1: - # simple 2D pos embedding, no temporal component - pos_embed = interpolate_pos_encoding_2d(npatch_per_img, pos_embed) - elif patches_layout[0] > 1: - # pos embed has a temporal component - assert len(input_shape) == 4, "temporal interpolation not supported" - # we only support 2D interpolation in this case - num_frames = patches_layout[0] - num_spatial_tokens = patches_layout[1] * patches_layout[2] - pos_embed = pos_embed.view(1, num_frames, num_spatial_tokens, -1) - # interpolate embedding for zeroth frame - pos_embed = interpolate_pos_encoding_2d( - npatch_per_img, pos_embed[0, 0, ...].unsqueeze(0) - ) - else: - raise ValueError("This type of interpolation isn't implemented") - - return torch.cat((class_emb, pos_embed), dim=1) - - -def _get_pos_embedding( - npatch_per_img, - pos_embed, - patches_layout, - input_shape, - first_patch_idx=1, -): - pos_embed = interpolate_pos_encoding( - npatch_per_img, - pos_embed, - patches_layout, - input_shape=input_shape, - first_patch_idx=first_patch_idx, - ) - return pos_embed - - -class PatchEmbedGeneric(nn.Module): - """ - PatchEmbed from Hydra - """ - - def __init__(self, proj_stem, norm_layer: Optional[nn.Module] = None): - super().__init__() - - if len(proj_stem) > 1: - self.proj = nn.Sequential(*proj_stem) - 
else: - # Special case to be able to load pre-trained models that were - # trained with a standard stem - self.proj = proj_stem[0] - self.norm_layer = norm_layer - - def get_patch_layout(self, img_size): - with torch.no_grad(): - dummy_img = torch.zeros( - [ - 1, - ] - + img_size - ) - dummy_out = self.proj(dummy_img) - embed_dim = dummy_out.shape[1] - patches_layout = tuple(dummy_out.shape[2:]) - num_patches = np.prod(patches_layout) - return patches_layout, num_patches, embed_dim - - def forward(self, x): - x = self.proj(x) - # B C (T) H W -> B (T)HW C - x = x.flatten(2).transpose(1, 2) - if self.norm_layer is not None: - x = self.norm_layer(x) - return x - - -class SpatioTemporalPosEmbeddingHelper(VerboseNNModule): - def __init__( - self, - patches_layout: List, - num_patches: int, - num_cls_tokens: int, - embed_dim: int, - learnable: bool, - ) -> None: - super().__init__() - self.num_cls_tokens = num_cls_tokens - self.patches_layout = patches_layout - self.num_patches = num_patches - self.num_tokens = num_cls_tokens + num_patches - self.learnable = learnable - if self.learnable: - self.pos_embed = nn.Parameter(torch.zeros(1, self.num_tokens, embed_dim)) - trunc_normal_(self.pos_embed, std=0.02) - else: - self.register_buffer( - "pos_embed", get_sinusoid_encoding_table(self.num_tokens, embed_dim) - ) - - def get_pos_embedding(self, vision_input, all_vision_tokens): - input_shape = vision_input.shape - pos_embed = _get_pos_embedding( - all_vision_tokens.size(1) - self.num_cls_tokens, - pos_embed=self.pos_embed, - patches_layout=self.patches_layout, - input_shape=input_shape, - first_patch_idx=self.num_cls_tokens, - ) - return pos_embed - - -class RGBDTPreprocessor(VerboseNNModule): - def __init__( - self, - rgbt_stem: PatchEmbedGeneric, - depth_stem: PatchEmbedGeneric, - img_size: List = (3, 224, 224), - num_cls_tokens: int = 1, - pos_embed_fn: Callable = None, - use_type_embed: bool = False, - init_param_style: str = "openclip", - ) -> None: - super().__init__() - stem = rgbt_stem if rgbt_stem is not None else depth_stem - ( - self.patches_layout, - self.num_patches, - self.embed_dim, - ) = stem.get_patch_layout(img_size) - self.rgbt_stem = rgbt_stem - self.depth_stem = depth_stem - self.use_pos_embed = pos_embed_fn is not None - self.use_type_embed = use_type_embed - self.num_cls_tokens = num_cls_tokens - - if self.use_pos_embed: - self.pos_embedding_helper = pos_embed_fn( - patches_layout=self.patches_layout, - num_cls_tokens=num_cls_tokens, - num_patches=self.num_patches, - embed_dim=self.embed_dim, - ) - if self.num_cls_tokens > 0: - self.cls_token = nn.Parameter( - torch.zeros(1, self.num_cls_tokens, self.embed_dim) - ) - if self.use_type_embed: - self.type_embed = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) - - self.init_parameters(init_param_style) - - @torch.no_grad() - def init_parameters(self, init_param_style): - if init_param_style == "openclip": - # OpenCLIP style initialization - scale = self.embed_dim**-0.5 - if self.use_pos_embed: - nn.init.normal_(self.pos_embedding_helper.pos_embed) - self.pos_embedding_helper.pos_embed *= scale - - if self.num_cls_tokens > 0: - nn.init.normal_(self.cls_token) - self.cls_token *= scale - elif init_param_style == "vit": - self.cls_token.data.fill_(0) - else: - raise ValueError(f"Unknown init {init_param_style}") - - if self.use_type_embed: - nn.init.normal_(self.type_embed) - - def tokenize_input_and_cls_pos(self, input, stem, mask): - # tokens is of shape B x L x D - tokens = stem(input) - assert tokens.ndim == 3 - assert 
tokens.shape[2] == self.embed_dim - B = tokens.shape[0] - if self.num_cls_tokens > 0: - class_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole class_tokens impl from Phil Wang, thanks - tokens = torch.cat((class_tokens, tokens), dim=1) - if self.use_pos_embed: - pos_embed = self.pos_embedding_helper.get_pos_embedding(input, tokens) - tokens = tokens + pos_embed - if self.use_type_embed: - tokens = tokens + self.type_embed.expand(B, -1, -1) - return tokens - - def forward(self, vision=None, depth=None, patch_mask=None): - if patch_mask is not None: - raise NotImplementedError() - - if vision is not None: - vision_tokens = self.tokenize_input_and_cls_pos( - vision, self.rgbt_stem, patch_mask - ) - - if depth is not None: - depth_tokens = self.tokenize_input_and_cls_pos( - depth, self.depth_stem, patch_mask - ) - - # aggregate tokens - if vision is not None and depth is not None: - final_tokens = vision_tokens + depth_tokens - else: - final_tokens = vision_tokens if vision is not None else depth_tokens - return_dict = { - "trunk": { - "tokens": final_tokens, - }, - "head": {}, - } - return return_dict - - -class AudioPreprocessor(RGBDTPreprocessor): - def __init__(self, audio_stem: PatchEmbedGeneric, **kwargs) -> None: - super().__init__(rgbt_stem=audio_stem, depth_stem=None, **kwargs) - - def forward(self, audio=None): - return super().forward(vision=audio) - - -class ThermalPreprocessor(RGBDTPreprocessor): - def __init__(self, thermal_stem: PatchEmbedGeneric, **kwargs) -> None: - super().__init__(rgbt_stem=thermal_stem, depth_stem=None, **kwargs) - - def forward(self, thermal=None): - return super().forward(vision=thermal) - - -def build_causal_attention_mask(context_length): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(context_length, context_length, requires_grad=False) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - -class TextPreprocessor(VerboseNNModule): - def __init__( - self, - vocab_size: int, - context_length: int, - embed_dim: int, - causal_masking: bool, - supply_seq_len_to_head: bool = True, - num_cls_tokens: int = 0, - init_param_style: str = "openclip", - ) -> None: - super().__init__() - self.vocab_size = vocab_size - self.context_length = context_length - self.token_embedding = nn.Embedding(vocab_size, embed_dim) - self.pos_embed = nn.Parameter( - torch.empty(1, self.context_length + num_cls_tokens, embed_dim) - ) - self.causal_masking = causal_masking - if self.causal_masking: - mask = build_causal_attention_mask(self.context_length) - # register the mask as a buffer so it can be moved to the right device - self.register_buffer("mask", mask) - - self.supply_seq_len_to_head = supply_seq_len_to_head - self.num_cls_tokens = num_cls_tokens - self.embed_dim = embed_dim - if num_cls_tokens > 0: - assert self.causal_masking is False, "Masking + CLS token isn't implemented" - self.cls_token = nn.Parameter( - torch.zeros(1, self.num_cls_tokens, embed_dim) - ) - - self.init_parameters(init_param_style) - - @torch.no_grad() - def init_parameters(self, init_param_style="openclip"): - # OpenCLIP style initialization - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.pos_embed, std=0.01) - - if init_param_style == "openclip": - # OpenCLIP style initialization - scale = self.embed_dim**-0.5 - if self.num_cls_tokens > 0: - nn.init.normal_(self.cls_token) - self.cls_token *= scale - elif 
init_param_style == "vit": - self.cls_token.data.fill_(0) - else: - raise ValueError(f"Unknown init {init_param_style}") - - def forward(self, text): - # text tokens are of shape B x L x D - text_tokens = self.token_embedding(text) - # concat CLS tokens if any - if self.num_cls_tokens > 0: - B = text_tokens.shape[0] - class_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole class_tokens impl from Phil Wang, thanks - text_tokens = torch.cat((class_tokens, text_tokens), dim=1) - text_tokens = text_tokens + self.pos_embed - return_dict = { - "trunk": { - "tokens": text_tokens, - }, - "head": {}, - } - # Compute sequence length after adding CLS tokens - if self.supply_seq_len_to_head: - text_lengths = text.argmax(dim=-1) - return_dict["head"] = { - "seq_len": text_lengths, - } - if self.causal_masking: - return_dict["trunk"].update({"attn_mask": self.mask}) - return return_dict - - -class Im2Video(nn.Module): - """Convert an image into a trivial video.""" - - def __init__(self, time_dim=2): - super().__init__() - self.time_dim = time_dim - - def forward(self, x): - if x.ndim == 4: - # B, C, H, W -> B, C, T, H, W - return x.unsqueeze(self.time_dim) - elif x.ndim == 5: - return x - else: - raise ValueError(f"Dimension incorrect {x.shape}") - - -class PadIm2Video(Im2Video): - def __init__(self, ntimes, pad_type, time_dim=2): - super().__init__(time_dim=time_dim) - assert ntimes > 0 - assert pad_type in ["zero", "repeat"] - self.ntimes = ntimes - self.pad_type = pad_type - - def forward(self, x): - x = super().forward(x) - if x.shape[self.time_dim] == 1: - if self.pad_type == "repeat": - new_shape = [1] * len(x.shape) - new_shape[self.time_dim] = self.ntimes - x = x.repeat(new_shape) - elif self.pad_type == "zero": - padarg = [0, 0] * len(x.shape) - padarg[2 * self.time_dim + 1] = self.ntimes - x.shape[self.time_dim] - x = nn.functional.pad(x, padarg) - return x - - -# Modified from github.com/openai/CLIP -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r"\s+", " ", text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str, context_length=77): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - - with g_pathmgr.open(bpe_path, "rb") as fh: - bpe_bytes = io.BytesIO(fh.read()) - merges = gzip.open(bpe_bytes).read().decode("utf-8").split("\n") - merges = merges[1 : 49152 - 256 - 2 + 1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v + "" for v in vocab] - for merge in merges: - vocab.append("".join(merge)) - vocab.extend(["<|startoftext|>", "<|endoftext|>"]) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = { - "<|startoftext|>": "<|startoftext|>", - "<|endoftext|>": "<|endoftext|>", - } - self.pat = re.compile( - r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", - re.IGNORECASE, - ) - self.context_length = context_length - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + (token[-1] + "",) - pairs = get_pairs(word) - - if not pairs: - return token + "" - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - bpe_tokens.extend( - self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ") - ) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder[token] for token in tokens]) - text = ( - bytearray([self.byte_decoder[c] for c in text]) - .decode("utf-8", errors="replace") - .replace("", " ") - ) - return text - - def __call__(self, texts, context_length=None): - if not context_length: - context_length = self.context_length - - if isinstance(texts, str): - texts = [texts] - - sot_token = self.encoder["<|startoftext|>"] - eot_token = self.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + self.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - tokens = tokens[:context_length] - result[i, : len(tokens)] = torch.tensor(tokens) - - if len(result) == 1: - return result[0] - return result - - -class IMUPreprocessor(VerboseNNModule): - def __init__( - self, - kernel_size: int, - 
imu_stem: PatchEmbedGeneric, - embed_dim: int, - img_size: List = (6, 2000), - num_cls_tokens: int = 1, - pos_embed_fn: Callable = None, - init_param_style: str = "openclip", - ) -> None: - super().__init__() - stem = imu_stem - self.imu_stem = imu_stem - self.embed_dim = embed_dim - self.use_pos_embed = pos_embed_fn is not None - self.num_cls_tokens = num_cls_tokens - self.kernel_size = kernel_size - self.pos_embed = nn.Parameter( - torch.empty(1, (img_size[1] // kernel_size) + num_cls_tokens, embed_dim) - ) - - if self.num_cls_tokens > 0: - self.cls_token = nn.Parameter( - torch.zeros(1, self.num_cls_tokens, self.embed_dim) - ) - - self.init_parameters(init_param_style) - - @torch.no_grad() - def init_parameters(self, init_param_style): - nn.init.normal_(self.pos_embed, std=0.01) - - if init_param_style == "openclip": - # OpenCLIP style initialization - scale = self.embed_dim**-0.5 - - if self.num_cls_tokens > 0: - nn.init.normal_(self.cls_token) - self.cls_token *= scale - elif init_param_style == "vit": - self.cls_token.data.fill_(0) - else: - raise ValueError(f"Unknown init {init_param_style}") - - def tokenize_input_and_cls_pos(self, input, stem): - # tokens is of shape B x L x D - tokens = stem.norm_layer(stem.proj(input)) - assert tokens.ndim == 3 - assert tokens.shape[2] == self.embed_dim - B = tokens.shape[0] - if self.num_cls_tokens > 0: - class_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole class_tokens impl from Phil Wang, thanks - tokens = torch.cat((class_tokens, tokens), dim=1) - if self.use_pos_embed: - tokens = tokens + self.pos_embed - return tokens - - def forward(self, imu): - # Patchify - imu = imu.unfold( - -1, - self.kernel_size, - self.kernel_size, - ).permute(0, 2, 1, 3) - imu = imu.reshape(imu.size(0), imu.size(1), -1) - - imu_tokens = self.tokenize_input_and_cls_pos( - imu, - self.imu_stem, - ) - - return_dict = { - "trunk": { - "tokens": imu_tokens, - }, - "head": {}, - } - return return_dict diff --git a/spaces/osanseviero/draw-minimal-copy3/README.md b/spaces/osanseviero/draw-minimal-copy3/README.md deleted file mode 100644 index 02bd5a05da58c8ba7f81964f66572b5f2f39a09f..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/draw-minimal-copy3/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Quickdraw -emoji: 💻 -colorFrom: blue -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/pablo1n7/iberianGAN/utils/utils.py b/spaces/pablo1n7/iberianGAN/utils/utils.py deleted file mode 100644 index 5890e1b834319fe16de7f24444b7cf6abb0073d6..0000000000000000000000000000000000000000 --- a/spaces/pablo1n7/iberianGAN/utils/utils.py +++ /dev/null @@ -1,280 +0,0 @@ - -from torch import randperm, utils -from torch._utils import _accumulate -import numpy as np -import matplotlib.pyplot as plt -from matplotlib import offsetbox -from PIL import Image - -class Subset(utils.data.Dataset): - """ - Subset of a dataset at specified indices. - - Arguments: - dataset (Dataset): The whole Dataset - indices (sequence): Indices in the whole set selected for subset - """ - def __init__(self, dataset, indices): - self.dataset = dataset - self.indices = indices - - def __getitem__(self, idx): - return self.dataset[self.indices[idx]] - - def __len__(self): - return len(self.indices) - - - -def random_split(dataset, lengths): - """ - Randomly split a dataset into non-overlapping new datasets of given lengths. - - Arguments: - dataset (Dataset): Dataset to be split - lengths (sequence): lengths of splits to be produced - """ - if sum(lengths) != len(dataset): - raise ValueError("Sum of input lengths does not equal the length of the input dataset!") - - indices = randperm(sum(lengths)) - return [Subset(dataset, indices[offset - length:offset]) for offset, length in zip(_accumulate(lengths), lengths)] - -# Quizas deberia eliminar 3d o limpiar -def plot_embedding(X, merged, title = None, classes=11., showimage=True, distPl=0.006, onlyRoman=False): - x_min, x_max = np.min(X, 0), np.max(X, 0) - X = (X - x_min) / (x_max - x_min) - - plt.figure() - ax = plt.subplot(111) - ax.set_facecolor('xkcd:white') - """ - for i in range(X.shape[0]): - plt.text(X[i, 0], X[i, 1], str(merged.iloc[i][1]), - color=plt.cm.Set1(int(merged.iloc[i][1]) / float(classes)), - fontdict={'weight': 'bold', 'size': 9}) - """ - """ - for i in range(X.shape[0]): - if int(merged.iloc[i][1]) == 22: - plt.plot([X[i, 0]], [X[i, 1]], 'X', c="black", markersize=10) - plt.plot([X[i, 0]], [X[i, 1]], 'X', c='black', markersize=8) - else: - plt.plot([X[i, 0]], [X[i, 1]], 'o', c="black", markersize=6) - plt.plot([X[i, 0]], [X[i, 1]], 'o',c=plt.cm.Set3(int(merged.iloc[i][1])), markersize=4) - """ - - - if showimage and hasattr(offsetbox, 'AnnotationBbox'): - shown_images = np.array([[1., 1.]]) - for i in range(merged.shape[0]): - dist = np.sum((X[i] - shown_images) ** 2, 1) - - - if np.min(dist) < distPl: #6e-4: - # don't show points that are too close - continue - - shown_images = np.r_[shown_images, [X[i]]] - image = Image.open(merged.iloc[i][0]) - inverted_image = image #PIL.ImageOps.invert(image) - inverted_image.thumbnail((40, 40), Image.ANTIALIAS) - - props = dict(facecolor=plt.cm.Set3(int(merged.iloc[i][1])), alpha=1, lw=1) - imagebox = offsetbox.AnnotationBbox( - offsetbox.OffsetImage(inverted_image, cmap=plt.cm.gray), - X[i]+0.030, bboxprops=props) - ax.add_artist(imagebox) - plt.xticks([]), plt.yticks([]) - #cbar = plt.colorbar() - if title is not None: - plt.title(title) - - -import numpy as np -import cv2 -from skimage import measure -def landmarks(img_grey, N = 50): - thresh = 200 - ret,img = cv2.threshold(img_grey, thresh, 255, cv2.THRESH_BINARY) - if img.ndim == 2: - img_s = np.ones((img.shape[0] + 100, img.shape[0] + 100)) * 255 - img_s[50:-50, 50:-50] = img - img = img_s - contours = measure.find_contours(img, 0.5) - #fig = plt.figure(figsize=(7, 7)) - #ax = fig.add_subplot(111) - #ax.imshow(img, 
interpolation='nearest', cmap=plt.cm.gray) - - # for n, contour in enumerate(contours): - #print(contours[0].shape) - contour = contours[0] - #ax.plot(contour[:, 1], contour[:, 0], linewidth=5) - # resample_contour = contour[np.random.choice(contour.shape[0], 150, replace=False), :] - resample_contour = interpcurve(N, contour[:, 0], contour[:, 1]) - # print(resample_contour[:4, 0], resample_contour[:4, 1], resample_contour[:4].ravel()) - #df_semilandmarks.loc[index] = [id_name, classe_name] + list(resample_contour.ravel()) - #ax.plot(resample_contour[:, 1], resample_contour[:, 0], 'om', linewidth=5) - #plt.savefig('output/landmarked_'+id_name) - #plt.show() - return resample_contour - -def interpcurve(N, pX, pY): - #equally spaced in arclength - N = np.transpose(np.linspace(0, 1, N)) - #how many points will be uniformly interpolated? - nt = N.size - - #number of points on the curve - n = pX.size - pxy = np.array((pX, pY)).T - p1 = pxy[0,:] - pend = pxy[-1,:] - last_segment = np.linalg.norm(np.subtract(p1, pend)) - epsilon= 10 * np.finfo(float).eps - - #IF the two end points are not close enough lets close the curve - if last_segment > epsilon * np.linalg.norm(np.amax(abs(pxy), axis=0)): - pxy = np.vstack((pxy, p1)) - nt = nt + 1 - - pt = np.zeros((nt, 2)) - - #Compute the chordal arclength of each segment. - chordlen = (np.sum(np.diff(pxy, axis=0) ** 2, axis=1)) ** (1 / 2) - #Normalize the arclengths to a unit total - chordlen = chordlen / np.sum(chordlen) - #cumulative arclength - cumarc = np.append(0, np.cumsum(chordlen)) - - tbins= np.digitize(N, cumarc) # bin index in which each N is in - - #catch any problems at the ends - tbins[np.where(tbins<=0 | (N<=0))]=1 - tbins[np.where(tbins >= n | (N >= 1))] = n - 1 - - s = np.divide((N - cumarc[tbins]), chordlen[tbins-1]) - pt = pxy[tbins,:] + np.multiply((pxy[tbins,:] - pxy[tbins-1,:]), (np.vstack([s]*2)).T) - - return pt - - - -def segmentation(img, vertical): - img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - #plt.imshow(img, cmap=plt.cm.gray) - if vertical: - background_0 = np.ones((img.shape)) + 254 - background_0[img.shape[0]//2:, :] = img[img.shape[0]//2:,:] - #plt.imshow(background_0, cmap=plt.cm.gray) - #fig = plt.figure() - - background_1 = np.ones((img.shape)) + 254 - background_1[:img.shape[0]//2, :] = img[:img.shape[0]//2,:] - #plt.imshow(background_1, cmap=plt.cm.gray) - #fig = plt.figure() - - - else: - background_1 = np.ones((img.shape)) + 254 - background_1[:, :img.shape[0]//2] = img[:,:img.shape[0]//2] - #plt.imshow(background_0, cmap=plt.cm.gray) - #fig = plt.figure() - - - background_0 = np.ones((img.shape)) + 254 - background_0[:, img.shape[0]//2:] = img[:,img.shape[0]//2:] - #plt.imshow(background_1, cmap=plt.cm.gray) - #fig = plt.figure() - - return background_0, background_1 - -def plotTwoImages(img_1, img_2, title): - plt.figure(figsize=(12,5)) - plt.subplot(1, 2, 1) - - plt.imshow(img_1) - plt.title(title[0]) - plt.xticks([]) - plt.yticks([]) - plt.subplot(1, 2, 2) - - plt.imshow(img_2) - plt.xticks([]) - plt.yticks([]) - plt.title(title[1]) - -def plotLandmarks(landmarks_img_1_part_2, landmarks_img_2_part_2, title, comparateShow=True): - plt.figure(figsize=(12,5)) - size = (1, 2) - if comparateShow: - size = (1, 3) - plt.subplot( size[0], size[1], 1) - plt.plot(landmarks_img_1_part_2[:,0], landmarks_img_1_part_2[:,1], '-o') - plt.fill(landmarks_img_1_part_2[:, 0] , landmarks_img_1_part_2[:, 1], 'k') - plt.title(title[0]) - plt.xticks([]) - plt.yticks([]) - - - plt.subplot(size[0], size[1], 2) - 
plt.plot(landmarks_img_2_part_2[:,0], landmarks_img_2_part_2[:,1], '-o') - plt.fill(landmarks_img_2_part_2[:, 0] , landmarks_img_2_part_2[:, 1], 'k') - plt.title(title[1]) - plt.xticks([]) - plt.yticks([]) - - - if comparateShow: - plt.subplot(size[0], size[1], 3) - plt.plot(landmarks_img_1_part_2[:,0], landmarks_img_1_part_2[:,1], '-o') - plt.plot(landmarks_img_2_part_2[:,0], landmarks_img_2_part_2[:,1], '-o') - plt.title(title[2]) - plt.xticks([]) - plt.yticks([]) - -def plotLandmarks2(landmarks_img_1_part_2, landmarks_img_2_part_2, title, comparateShow=True): - plt.figure(figsize=(12,5)) - fig, (ax1) = plt.subplots(1, 1, figsize=(12,5)) - ax1.axis("off") - landmarks_img_1_part_2 = landmarks_img_1_part_2 * -1 - landmarks_img_2_part_2 = landmarks_img_2_part_2 * -1 - ax1.plot(landmarks_img_1_part_2[:,1], landmarks_img_1_part_2[:,0], '-o') - ax1.plot(landmarks_img_2_part_2[:,1], landmarks_img_2_part_2[:,0], '-o') - plt.title(title[2]) - ax1.set_aspect(2) - plt.xticks([]) - plt.yticks([]) - - -def plotLandmarksItem(fig, ax1, landmarks_img_1_part_2, landmarks_img_2_part_2, title, comparateShow=True): - ax1.axis("off") - landmarks_img_1_part_2 = landmarks_img_1_part_2 * -1 - landmarks_img_2_part_2 = landmarks_img_2_part_2 * -1 - ax1.plot(landmarks_img_1_part_2[:,1], landmarks_img_1_part_2[:,0], '-o') - ax1.plot(landmarks_img_2_part_2[:,1], landmarks_img_2_part_2[:,0], '-o') - ax1.set_title(title) - ax1.set_aspect(1) - - -def plotLandmarksALL(img_0, img_1, landmarks_img_1_part_1, landmarks_img_2_part_1, landmarks_img_1_part_2, landmarks_img_2_part_2, title): - plt.figure(figsize=(12,5)) - fig, (ax0, ax1, ax2, ax3) = plt.subplots(1, 4, figsize=(12,5)) - ax0.axis("off") - ax0.set_title('INPUT') - ax3.axis("off") - ax3.set_title('OUTPUT') - plt.xticks([]) - plt.yticks([]) - ax0.imshow(img_0) - plotLandmarksItem(fig, ax1, landmarks_img_1_part_1, landmarks_img_2_part_1, title[0]) - plotLandmarksItem(fig, ax2, landmarks_img_1_part_2, landmarks_img_2_part_2, title[1]) - ax3.imshow(img_1) - plt.xticks([]) - plt.yticks([]) - - - - -if __name__ == "__main__": - pass \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/imagic_stable_diffusion.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/imagic_stable_diffusion.py deleted file mode 100644 index 56bd381a9e65aa8edbe56cf7f22127c5c449b7ee..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/imagic_stable_diffusion.py +++ /dev/null @@ -1,496 +0,0 @@ -""" - modeled after the textual_inversion.py / train_dreambooth.py and the work - of justinpinkney here: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb -""" -import inspect -import warnings -from typing import List, Optional, Union - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -from accelerate import Accelerator - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import DiffusionPipeline -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, 
PNDMScheduler -from diffusers.utils import logging - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - w, h = image.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -class ImagicStableDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for imagic image editing. - See paper here: https://arxiv.org/pdf/2210.09276.pdf - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offsensive or harmful. - Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - ): - super().__init__() - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. 
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def train( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image], - height: Optional[int] = 512, - width: Optional[int] = 512, - generator: Optional[torch.Generator] = None, - embedding_learning_rate: float = 0.001, - diffusion_model_learning_rate: float = 2e-6, - text_embedding_optimization_steps: int = 500, - model_fine_tuning_optimization_steps: int = 1000, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`. 
- return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - accelerator = Accelerator( - gradient_accumulation_steps=1, - mixed_precision="fp16", - ) - - if "torch_device" in kwargs: - device = kwargs.pop("torch_device") - warnings.warn( - "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0." - " Consider using `pipe.to(torch_device)` instead." - ) - - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.to(device) - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - # Freeze vae and unet - self.vae.requires_grad_(False) - self.unet.requires_grad_(False) - self.text_encoder.requires_grad_(False) - self.unet.eval() - self.vae.eval() - self.text_encoder.eval() - - if accelerator.is_main_process: - accelerator.init_trackers( - "imagic", - config={ - "embedding_learning_rate": embedding_learning_rate, - "text_embedding_optimization_steps": text_embedding_optimization_steps, - }, - ) - - # get text embeddings for prompt - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = torch.nn.Parameter( - self.text_encoder(text_input.input_ids.to(self.device))[0], requires_grad=True - ) - text_embeddings = text_embeddings.detach() - text_embeddings.requires_grad_() - text_embeddings_orig = text_embeddings.clone() - - # Initialize the optimizer - optimizer = torch.optim.Adam( - [text_embeddings], # only optimize the embeddings - lr=embedding_learning_rate, - ) - - if isinstance(image, PIL.Image.Image): - image = preprocess(image) - - latents_dtype = text_embeddings.dtype - image = image.to(device=self.device, dtype=latents_dtype) - init_latent_image_dist = self.vae.encode(image).latent_dist - image_latents = init_latent_image_dist.sample(generator=generator) - image_latents = 0.18215 * image_latents - - progress_bar = tqdm(range(text_embedding_optimization_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - global_step = 0 - - logger.info("First optimizing the text embedding to better reconstruct the init image") - for _ in range(text_embedding_optimization_steps): - with accelerator.accumulate(text_embeddings): - # Sample noise that we'll add to the latents - noise = torch.randn(image_latents.shape).to(image_latents.device) - timesteps = torch.randint(1000, (1,), device=image_latents.device) - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps) - - # Predict the noise residual - noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample - - loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean() - 
accelerator.backward(loss) - - optimizer.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - accelerator.wait_for_everyone() - - text_embeddings.requires_grad_(False) - - # Now we fine tune the unet to better reconstruct the image - self.unet.requires_grad_(True) - self.unet.train() - optimizer = torch.optim.Adam( - self.unet.parameters(), # only optimize unet - lr=diffusion_model_learning_rate, - ) - progress_bar = tqdm(range(model_fine_tuning_optimization_steps), disable=not accelerator.is_local_main_process) - - logger.info("Next fine tuning the entire model to better reconstruct the init image") - for _ in range(model_fine_tuning_optimization_steps): - with accelerator.accumulate(self.unet.parameters()): - # Sample noise that we'll add to the latents - noise = torch.randn(image_latents.shape).to(image_latents.device) - timesteps = torch.randint(1000, (1,), device=image_latents.device) - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps) - - # Predict the noise residual - noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample - - loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean() - accelerator.backward(loss) - - optimizer.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - accelerator.wait_for_everyone() - self.text_embeddings_orig = text_embeddings_orig - self.text_embeddings = text_embeddings - - @torch.no_grad() - def __call__( - self, - alpha: float = 1.2, - height: Optional[int] = 512, - width: Optional[int] = 512, - num_inference_steps: Optional[int] = 50, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - guidance_scale: float = 7.5, - eta: float = 0.0, - ): - r""" - Function invoked when calling the pipeline for generation. - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. 
- eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - if self.text_embeddings is None: - raise ValueError("Please run the pipe.train() before trying to generate an image.") - if self.text_embeddings_orig is None: - raise ValueError("Please run the pipe.train() before trying to generate an image.") - - text_embeddings = alpha * self.text_embeddings_orig + (1 - alpha) * self.text_embeddings - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens = [""] - max_length = self.tokenizer.model_max_length - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.view(1, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - - # Unlike in other pipelines, latents need to be generated in the target device - # for 1-to-1 results reproducibility with the CompVis implementation. - # However this currently doesn't work in `mps`. 
- latents_shape = (1, self.unet.config.in_channels, height // 8, width // 8) - latents_dtype = text_embeddings.dtype - if self.device.type == "mps": - # randn does not exist on mps - latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to( - self.device - ) - else: - latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype) - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - # Some schedulers like PNDM have timesteps as arrays - # It's more optimized to move all timesteps to correct device beforehand - timesteps_tensor = self.scheduler.timesteps.to(self.device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to( - self.device - ) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype) - ) - else: - has_nsfw_concept = None - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py deleted file mode 100644 index 364ed7e031898f02eee1ff4b353a840474b75162..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py +++ /dev/null @@ -1,936 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import shutil -from pathlib import Path - -import accelerate -import datasets -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.state import AcceleratorState -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from packaging import version -from PIL import Image -from tqdm import tqdm -from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection -from transformers.utils import ContextManagers - -import diffusers -from diffusers import AutoPipelineForText2Image, DDPMScheduler, UNet2DConditionModel, VQModel -from diffusers.optimization import get_scheduler -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version, is_wandb_available, make_image_grid -from diffusers.utils.import_utils import is_xformers_available - - -if is_wandb_available(): - import wandb - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.21.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def save_model_card( - args, - repo_id: str, - images=None, - repo_folder=None, -): - img_str = "" - if len(images) > 0: - image_grid = make_image_grid(images, 1, len(args.validation_prompts)) - image_grid.save(os.path.join(repo_folder, "val_imgs_grid.png")) - img_str += "![val_imgs_grid](./val_imgs_grid.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {args.pretrained_decoder_model_name_or_path} -datasets: -- {args.dataset_name} -prior: -- {args.pretrained_prior_model_name_or_path} -tags: -- kandinsky -- text-to-image -- diffusers -inference: true ---- - """ - model_card = f""" -# Finetuning - {repo_id} - -This pipeline was finetuned from **{args.pretrained_decoder_model_name_or_path}** on the **{args.dataset_name}** dataset. 
Below are some example images generated with the finetuned pipeline using the following prompts: {args.validation_prompts}: \n -{img_str} - -## Pipeline usage - -You can use the pipeline like so: - -```python -from diffusers import DiffusionPipeline -import torch - -pipeline = AutoPipelineForText2Image.from_pretrained("{repo_id}", torch_dtype=torch.float16) -prompt = "{args.validation_prompts[0]}" -image = pipeline(prompt).images[0] -image.save("my_image.png") -``` - -## Training info - -These are the key hyperparameters used during training: - -* Epochs: {args.num_train_epochs} -* Learning rate: {args.learning_rate} -* Batch size: {args.train_batch_size} -* Gradient accumulation steps: {args.gradient_accumulation_steps} -* Image resolution: {args.resolution} -* Mixed-precision: {args.mixed_precision} - -""" - wandb_info = "" - if is_wandb_available(): - wandb_run_url = None - if wandb.run is not None: - wandb_run_url = wandb.run.url - - if wandb_run_url is not None: - wandb_info = f""" -More information on all the CLI arguments and the environment are available on your [`wandb` run page]({wandb_run_url}). -""" - - model_card += wandb_info - - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def log_validation(vae, image_encoder, image_processor, unet, args, accelerator, weight_dtype, epoch): - logger.info("Running validation... ") - - pipeline = AutoPipelineForText2Image.from_pretrained( - args.pretrained_decoder_model_name_or_path, - vae=accelerator.unwrap_model(vae), - prior_image_encoder=accelerator.unwrap_model(image_encoder), - prior_image_processor=image_processor, - unet=accelerator.unwrap_model(unet), - torch_dtype=weight_dtype, - ) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - if args.enable_xformers_memory_efficient_attention: - pipeline.enable_xformers_memory_efficient_attention() - - if args.seed is None: - generator = None - else: - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - - images = [] - for i in range(len(args.validation_prompts)): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompts[i], num_inference_steps=20, generator=generator).images[0] - - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - elif tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompts[i]}") - for i, image in enumerate(images) - ] - } - ) - else: - logger.warn(f"image logging not implemented for {tracker.name}") - - del pipeline - torch.cuda.empty_cache() - - return images - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of finetuning Kandinsky 2.2.") - parser.add_argument( - "--pretrained_decoder_model_name_or_path", - type=str, - default="kandinsky-community/kandinsky-2-2-decoder", - required=False, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--pretrained_prior_model_name_or_path", - type=str, - default="kandinsky-community/kandinsky-2-2-prior", - required=False, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on 
(could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--validation_prompts", - type=str, - default=None, - nargs="+", - help=("A set of prompts evaluated every `--validation_epochs` and logged to `--report_to`."), - ) - parser.add_argument( - "--output_dir", - type=str, - default="kandi_2_2-model-finetuned", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--train_batch_size", type=int, default=1, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="learning rate", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--snr_gamma", - type=float, - default=None, - help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. " - "More details here: https://arxiv.org/abs/2303.09556.", - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.") - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument( - "--adam_weight_decay", - type=float, - default=0.0, - required=False, - help="weight decay_to_use", - ) - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
- ) - parser.add_argument( - "--validation_epochs", - type=int, - default=5, - help="Run validation every X epochs.", - ) - parser.add_argument( - "--tracker_project_name", - type=str, - default="text2image-fine-tune", - help=( - "The `project_name` argument passed to Accelerator.init_trackers for" - " more information see https://huggingface.co/docs/accelerate/v0.17.0/en/package_reference/accelerator#accelerate.Accelerator" - ), - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - return args - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. 
- if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="scheduler") - image_processor = CLIPImageProcessor.from_pretrained( - args.pretrained_prior_model_name_or_path, subfolder="image_processor" - ) - - def deepspeed_zero_init_disabled_context_manager(): - """ - returns either a context list that includes one that will disable zero.Init or an empty context list - """ - deepspeed_plugin = AcceleratorState().deepspeed_plugin if accelerate.state.is_initialized() else None - if deepspeed_plugin is None: - return [] - - return [deepspeed_plugin.zero3_init_context_manager(enable=False)] - - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - with ContextManagers(deepspeed_zero_init_disabled_context_manager()): - vae = VQModel.from_pretrained( - args.pretrained_decoder_model_name_or_path, subfolder="movq", torch_dtype=weight_dtype - ).eval() - image_encoder = CLIPVisionModelWithProjection.from_pretrained( - args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype - ).eval() - unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet") - - # Freeze vae and image_encoder - vae.requires_grad_(False) - image_encoder.requires_grad_(False) - - # Create EMA for the unet. - if args.use_ema: - ema_unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet") - ema_unet = EMAModel(ema_unet.parameters(), model_cls=UNet2DConditionModel, model_config=ema_unet.config) - ema_unet.to(accelerator.device) - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - def compute_snr(timesteps): - """ - Computes SNR as per https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L847-L849 - """ - alphas_cumprod = noise_scheduler.alphas_cumprod - sqrt_alphas_cumprod = alphas_cumprod**0.5 - sqrt_one_minus_alphas_cumprod = (1.0 - alphas_cumprod) ** 0.5 - - # Expand the tensors. 
- # Adapted from https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L1026 - sqrt_alphas_cumprod = sqrt_alphas_cumprod.to(device=timesteps.device)[timesteps].float() - while len(sqrt_alphas_cumprod.shape) < len(timesteps.shape): - sqrt_alphas_cumprod = sqrt_alphas_cumprod[..., None] - alpha = sqrt_alphas_cumprod.expand(timesteps.shape) - - sqrt_one_minus_alphas_cumprod = sqrt_one_minus_alphas_cumprod.to(device=timesteps.device)[timesteps].float() - while len(sqrt_one_minus_alphas_cumprod.shape) < len(timesteps.shape): - sqrt_one_minus_alphas_cumprod = sqrt_one_minus_alphas_cumprod[..., None] - sigma = sqrt_one_minus_alphas_cumprod.expand(timesteps.shape) - - # Compute SNR. - snr = (alpha / sigma) ** 2 - return snr - - # `accelerate` 0.16.0 will have better support for customized saving - if version.parse(accelerate.__version__) >= version.parse("0.16.0"): - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - if args.use_ema: - ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema")) - - for i, model in enumerate(models): - model.save_pretrained(os.path.join(output_dir, "unet")) - - # make sure to pop weight so that corresponding model is not saved again - weights.pop() - - def load_model_hook(models, input_dir): - if args.use_ema: - load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel) - ema_unet.load_state_dict(load_model.state_dict()) - ema_unet.to(accelerator.device) - del load_model - - for i in range(len(models)): - # pop models so that they are not loaded again - model = models.pop() - - # load diffusers style into model - load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") - model.register_to_config(**load_model.config) - - model.load_state_dict(load_model.state_dict()) - del load_model - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - unet.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. 
- dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - image_column = args.image_column - if image_column not in column_names: - raise ValueError(f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}") - - def center_crop(image): - width, height = image.size - new_size = min(width, height) - left = (width - new_size) / 2 - top = (height - new_size) / 2 - right = (width + new_size) / 2 - bottom = (height + new_size) / 2 - return image.crop((left, top, right, bottom)) - - def train_transforms(img): - img = center_crop(img) - img = img.resize((args.resolution, args.resolution), resample=Image.BICUBIC, reducing_gap=1) - img = np.array(img).astype(np.float32) / 127.5 - 1 - img = torch.from_numpy(np.transpose(img, [2, 0, 1])) - return img - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["pixel_values"] = [train_transforms(image) for image in images] - examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - clip_pixel_values = torch.stack([example["clip_pixel_values"] for example in examples]) - clip_pixel_values = clip_pixel_values.to(memory_format=torch.contiguous_format).float() - return {"pixel_values": pixel_values, "clip_pixel_values": clip_pixel_values} - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - # Move image_encode and vae to gpu and cast to weight_dtype - image_encoder.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - # We need to recalculate our total training steps as the size of the training dataloader may have changed. 
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - tracker_config = dict(vars(args)) - tracker_config.pop("validation_prompts") - accelerator.init_trackers(args.tracker_project_name, tracker_config) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." 
- ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - images = batch["pixel_values"].to(weight_dtype) - clip_images = batch["clip_pixel_values"].to(weight_dtype) - latents = vae.encode(images).latents - image_embeds = image_encoder(clip_images).image_embeds - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - target = noise - - # Predict the noise residual and compute loss - added_cond_kwargs = {"image_embeds": image_embeds} - - model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_kwargs).sample[:, :4] - - if args.snr_gamma is None: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - else: - # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556. - # Since we predict the noise instead of x_0, the original formulation is slightly changed. - # This is discussed in Section 4.2 of the same paper. - snr = compute_snr(timesteps) - mse_loss_weights = ( - torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(dim=1)[0] / snr - ) - # We first calculate the original loss. Then we mean over the non-batch dimensions and - # rebalance the sample-wise losses with their respective loss weights. - # Finally, we take the mean of the rebalanced loss. - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none") - loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights - loss = loss.mean() - - # Gather the losses across all processes for logging (if we use distributed training). 
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.use_ema: - ema_unet.step(unet.parameters()) - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process: - if args.validation_prompts is not None and epoch % args.validation_epochs == 0: - if args.use_ema: - # Store the UNet parameters temporarily and load the EMA parameters to perform inference. - ema_unet.store(unet.parameters()) - ema_unet.copy_to(unet.parameters()) - log_validation( - vae, - image_encoder, - image_processor, - unet, - args, - accelerator, - weight_dtype, - global_step, - ) - if args.use_ema: - # Switch back to the original UNet parameters. - ema_unet.restore(unet.parameters()) - - # Create the pipeline using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = accelerator.unwrap_model(unet) - if args.use_ema: - ema_unet.copy_to(unet.parameters()) - - pipeline = AutoPipelineForText2Image.from_pretrained( - args.pretrained_decoder_model_name_or_path, - vae=vae, - unet=unet, - ) - pipeline.decoder_pipe.save_pretrained(args.output_dir) - - # Run a final round of inference. 
- images = [] - if args.validation_prompts is not None: - logger.info("Running inference for collecting generated images...") - pipeline = pipeline.to(accelerator.device) - pipeline.torch_dtype = weight_dtype - pipeline.set_progress_bar_config(disable=True) - pipeline.enable_model_cpu_offload() - - if args.enable_xformers_memory_efficient_attention: - pipeline.enable_xformers_memory_efficient_attention() - - if args.seed is None: - generator = None - else: - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - - for i in range(len(args.validation_prompts)): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompts[i], num_inference_steps=20, generator=generator).images[0] - images.append(image) - - if args.push_to_hub: - save_model_card(args, repo_id, images, repo_folder=args.output_dir) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/passgenau-digital/virtual-chat-assistent-cc-energy/app.py b/spaces/passgenau-digital/virtual-chat-assistent-cc-energy/app.py deleted file mode 100644 index 2fd876e7e92d13c6a5428b15ed7fbcdd28e890d5..0000000000000000000000000000000000000000 --- a/spaces/passgenau-digital/virtual-chat-assistent-cc-energy/app.py +++ /dev/null @@ -1,186 +0,0 @@ -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter -from langchain.vectorstores import DocArrayInMemorySearch -from langchain.document_loaders import TextLoader -from langchain.chains import RetrievalQA, ConversationalRetrievalChain -from langchain.memory import ConversationBufferMemory -from langchain.chat_models import ChatOpenAI -from langchain.document_loaders import TextLoader -from langchain.document_loaders import PyPDFLoader -from langchain.document_loaders import DirectoryLoader - -from dotenv import load_dotenv, find_dotenv - -import panel as pn -import param -import os -import openai -import sys -import glob -sys.path.append('../..') - -pn.extension() -pn.config.template = 'bootstrap' - -_ = load_dotenv(find_dotenv()) # read local .env file - -openai.api_key = os.environ['OPENAI_API_KEY'] -llm_name = os.environ['LLM-MODEL'] - -class cbfs(param.Parameterized): - chat_history = param.List([]) - answer = param.String("") - db_query = param.String("") - db_response = param.List([]) - - def __init__(self, **params): - super(cbfs, self).__init__( **params) - self.panels = [] - self.loaded_file = "docs/B_2023_Abfertigung_Alt.pdf" - self.loaded_dir = "docs" - self.qa = load_db(self.loaded_dir,"stuff", 4) - - #def call_load_db(self, count): - # file_input.save("temp.pdf") # local copy - # self.loaded_file = file_input.filename - # button_load.button_style="outline" - # self.qa = load_db("temp.pdf", "stuff", 4) - # button_load.button_style="solid" - # self.clr_history() - # return pn.pane.Markdown(f"Loaded File: {self.loaded_file}") - - def get_pdfs(self): - if not self.loaded_dir: - print("no pdf directory set") - return - else: - docs_list = [] - docs = [f for f in sorted(os.listdir(self.loaded_dir)) if f.endswith(".pdf")] - print(f'documents: {docs}') - for doc in docs: - docs_list.append(pn.Row(pn.pane.Markdown(f'**{doc}**'))) - return pn.WidgetBox(*docs_list, width=1024, scroll=True) - - def convchain(self, query): - if not query: - return pn.WidgetBox(pn.Row('User:', pn.pane.Markdown("", 
width=1024)), scroll=True) - result = self.qa({"question": query, "chat_history": self.chat_history}) - self.chat_history.extend([(query, result["answer"])]) - self.db_query = result["generated_question"] - self.db_response = result["source_documents"] - self.answer = result['answer'] - self.panels.extend([ - pn.Row('User:', pn.pane.Markdown(query, width=1024)), - pn.Row('ChatBot:', pn.pane.Markdown(self.answer, width=1024, style={'background-color': '#F6F6F6'})) - ]) - inp.value = '' #clears loading indicator when cleared - return pn.WidgetBox(*self.panels,scroll=True) - - @param.depends('db_query ', ) - def get_lquest(self): - if not self.db_query : - return pn.Column( - pn.Row(pn.pane.Markdown(f"Last question to DB:", styles={'background-color': '#F6F6F6'})), - pn.Row(pn.pane.Str("no DB accesses so far")) - ) - return pn.Column( - pn.Row(pn.pane.Markdown(f"DB query:", styles={'background-color': '#F6F6F6'})), - pn.pane.Str(self.db_query ) - ) - - @param.depends('db_response', ) - def get_sources(self): - if not self.db_response: - return - rlist=[pn.Row(pn.pane.Markdown(f"Result of DB lookup:", styles={'background-color': '#F6F6F6'}))] - for doc in self.db_response: - print(doc) - rlist.append(pn.Row(pn.pane.Str(doc))) - return pn.WidgetBox(*rlist, width=1024, scroll=True) - - @param.depends('convchain', 'clr_history') - def get_chats(self): - if not self.chat_history: - return pn.WidgetBox(pn.Row(pn.pane.Str("No History Yet")), width=1024, scroll=True) - rlist=[pn.Row(pn.pane.Markdown(f"Current Chat History variable", styles={'background-color': '#F6F6F6'}))] - for exchange in self.chat_history: - rlist.append(pn.Row(pn.pane.Str(exchange))) - return pn.WidgetBox(*rlist, width=1024, scroll=True) - - def clr_history(self,count=0): - self.chat_history = [] - return - -def load_db(folder, chain_type, k): - # load documents - #loader = PyPDFLoader(file) - #loader = DirectoryLoader(folder, glob="**/*.pdf", show_progress=False, silent_errors=True) - loader = DirectoryLoader(folder) - documents = loader.load() - # split documents - text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150) - docs = text_splitter.split_documents(documents) - - - # define embedding - embeddings = OpenAIEmbeddings() - # create vector database from data - db = DocArrayInMemorySearch.from_documents(docs, embeddings) - # define retriever - retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k}) - # create a chatbot chain. Memory is managed externally. 
- qa = ConversationalRetrievalChain.from_llm( - llm=ChatOpenAI(model_name=llm_name, temperature=0), - chain_type=chain_type, - retriever=retriever, - return_source_documents=True, - return_generated_question=True, - ) - return qa - -cb = cbfs() - -#file_input = pn.widgets.FileInput(accept='.pdf') -#button_load = pn.widgets.Button(name="Load DB", button_type='primary') -button_clearhistory = pn.widgets.Button(name="Clear History", button_type='warning') -button_clearhistory.on_click(cb.clr_history) -inp = pn.widgets.TextInput( placeholder='Enter text here…') - -#bound_button_load = pn.bind(cb.call_load_db, button_load.param.clicks) -conversation = pn.bind(cb.convchain, inp) - -jpg_pane = pn.pane.Image( './passgenau-logo.png') - -tab1 = pn.Column( - pn.Row(inp), - pn.layout.Divider(), - pn.panel(conversation, loading_indicator=True, height=600), - pn.layout.Divider(), -) -tab2= pn.Column( - pn.panel(cb.get_lquest), - pn.layout.Divider(), - pn.panel(cb.get_sources ), -) -tab3= pn.Column( - pn.panel(cb.get_chats), - pn.layout.Divider(), -) -#tab4=pn.Column( -# pn.Row( file_input, button_load, bound_button_load), -# pn.Row( button_clearhistory, pn.pane.Markdown("Clears chat history. Can use to start a new topic" )), -# pn.layout.Divider(), -# pn.Row(jpg_pane.clone(width=400)) -#) -tab5=pn.Column( - pn.Row(pn.pane.Markdown('**Searchable Files:**')), - pn.panel(cb.get_pdfs()) -) -dashboard = pn.Column( - jpg_pane.clone(width=800), - pn.Row(pn.pane.Markdown('# passgenau.digital - Virtual Chat Assistent')), - pn.Row(pn.pane.HTML("https://passgenau.digital")), - pn.Tabs(('Conversation', tab1), ('Database', tab2), ('Chat History', tab3), ('PDF Documents', tab5))#,('Configure', tab4)) -) - -dashboard.servable() \ No newline at end of file diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/aws/resume.py b/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/aws/resume.py deleted file mode 100644 index b21731c979a121ab8227280351b70d6062efd983..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/aws/resume.py +++ /dev/null @@ -1,40 +0,0 @@ -# Resume all interrupted trainings in yolov5/ dir including DDP trainings -# Usage: $ python utils/aws/resume.py - -import os -import sys -from pathlib import Path - -import torch -import yaml - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[2] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -port = 0 # --master_port -path = Path('').resolve() -for last in path.rglob('*/**/last.pt'): - ckpt = torch.load(last) - if ckpt['optimizer'] is None: - continue - - # Load opt.yaml - with open(last.parent.parent / 'opt.yaml', errors='ignore') as f: - opt = yaml.safe_load(f) - - # Get device count - d = opt['device'].split(',') # devices - nd = len(d) # number of devices - ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel - - if ddp: # multi-GPU - port += 1 - cmd = f'python -m torch.distributed.run --nproc_per_node {nd} --master_port {port} train.py --resume {last}' - else: # single-GPU - cmd = f'python train.py --resume {last}' - - cmd += ' > /dev/null 2>&1 &' # redirect output to dev/null and run in daemon thread - print(cmd) - os.system(cmd) diff --git a/spaces/pinkq/Newbing/src/app/layout.tsx b/spaces/pinkq/Newbing/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- 
a/spaces/pinkq/Newbing/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: '../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
    - {/* @ts-ignore */} -
    -
    {children}
    -
    - -
    - - - ) -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/eucjpprober.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/eucjpprober.py deleted file mode 100644 index 39487f4098d7c2068b67d7d3dd85b61848974a23..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/eucjpprober.py +++ /dev/null @@ -1,102 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Union - -from .chardistribution import EUCJPDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .jpcntx import EUCJPContextAnalysis -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import EUCJP_SM_MODEL - - -class EUCJPProber(MultiByteCharSetProber): - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(EUCJP_SM_MODEL) - self.distribution_analyzer = EUCJPDistributionAnalysis() - self.context_analyzer = EUCJPContextAnalysis() - self.reset() - - def reset(self) -> None: - super().reset() - self.context_analyzer.reset() - - @property - def charset_name(self) -> str: - return "EUC-JP" - - @property - def language(self) -> str: - return "Japanese" - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - assert self.coding_sm is not None - assert self.distribution_analyzer is not None - - for i, byte in enumerate(byte_str): - # PY3K: byte_str is a byte array, so byte is an int, not a byte - coding_state = self.coding_sm.next_state(byte) - if coding_state == MachineState.ERROR: - self.logger.debug( - "%s %s prober hit error at byte %s", - self.charset_name, - self.language, - i, - ) - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - char_len = self.coding_sm.get_current_charlen() - if i == 0: - self._last_char[1] = byte - self.context_analyzer.feed(self._last_char, char_len) - self.distribution_analyzer.feed(self._last_char, char_len) - else: - self.context_analyzer.feed(byte_str[i - 1 : i + 1], char_len) - self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len) - - self._last_char[0] = byte_str[-1] - - if self.state == ProbingState.DETECTING: - if 
self.context_analyzer.got_enough_data() and ( - self.get_confidence() > self.SHORTCUT_THRESHOLD - ): - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self) -> float: - assert self.distribution_analyzer is not None - - context_conf = self.context_analyzer.get_confidence() - distrib_conf = self.distribution_analyzer.get_confidence() - return max(context_conf, distrib_conf) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/file_proxy.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/file_proxy.py deleted file mode 100644 index 4b0b0da6c2a62b2b1468c35ddd69f1bbb9b91aa8..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/file_proxy.py +++ /dev/null @@ -1,57 +0,0 @@ -import io -from typing import IO, TYPE_CHECKING, Any, List - -from .ansi import AnsiDecoder -from .text import Text - -if TYPE_CHECKING: - from .console import Console - - -class FileProxy(io.TextIOBase): - """Wraps a file (e.g. sys.stdout) and redirects writes to a console.""" - - def __init__(self, console: "Console", file: IO[str]) -> None: - self.__console = console - self.__file = file - self.__buffer: List[str] = [] - self.__ansi_decoder = AnsiDecoder() - - @property - def rich_proxied_file(self) -> IO[str]: - """Get proxied file.""" - return self.__file - - def __getattr__(self, name: str) -> Any: - return getattr(self.__file, name) - - def write(self, text: str) -> int: - if not isinstance(text, str): - raise TypeError(f"write() argument must be str, not {type(text).__name__}") - buffer = self.__buffer - lines: List[str] = [] - while text: - line, new_line, text = text.partition("\n") - if new_line: - lines.append("".join(buffer) + line) - buffer.clear() - else: - buffer.append(line) - break - if lines: - console = self.__console - with console: - output = Text("\n").join( - self.__ansi_decoder.decode_line(line) for line in lines - ) - console.print(output) - return len(text) - - def flush(self) -> None: - output = "".join(self.__buffer) - if output: - self.__console.print(output) - del self.__buffer[:] - - def fileno(self) -> int: - return self.__file.fileno() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/tree.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/tree.py deleted file mode 100644 index afe8da1a4a30daf6e48ffba514656e7c86c9abaa..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/tree.py +++ /dev/null @@ -1,251 +0,0 @@ -from typing import Iterator, List, Optional, Tuple - -from ._loop import loop_first, loop_last -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .jupyter import JupyterMixin -from .measure import Measurement -from .segment import Segment -from .style import Style, StyleStack, StyleType -from .styled import Styled - - -class Tree(JupyterMixin): - """A renderable for a tree structure. - - Args: - label (RenderableType): The renderable or str for the tree label. - style (StyleType, optional): Style of this tree. Defaults to "tree". - guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line". - expanded (bool, optional): Also display children. Defaults to True. - highlight (bool, optional): Highlight renderable (if str). Defaults to False. 
- """ - - def __init__( - self, - label: RenderableType, - *, - style: StyleType = "tree", - guide_style: StyleType = "tree.line", - expanded: bool = True, - highlight: bool = False, - hide_root: bool = False, - ) -> None: - self.label = label - self.style = style - self.guide_style = guide_style - self.children: List[Tree] = [] - self.expanded = expanded - self.highlight = highlight - self.hide_root = hide_root - - def add( - self, - label: RenderableType, - *, - style: Optional[StyleType] = None, - guide_style: Optional[StyleType] = None, - expanded: bool = True, - highlight: Optional[bool] = False, - ) -> "Tree": - """Add a child tree. - - Args: - label (RenderableType): The renderable or str for the tree label. - style (StyleType, optional): Style of this tree. Defaults to "tree". - guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line". - expanded (bool, optional): Also display children. Defaults to True. - highlight (Optional[bool], optional): Highlight renderable (if str). Defaults to False. - - Returns: - Tree: A new child Tree, which may be further modified. - """ - node = Tree( - label, - style=self.style if style is None else style, - guide_style=self.guide_style if guide_style is None else guide_style, - expanded=expanded, - highlight=self.highlight if highlight is None else highlight, - ) - self.children.append(node) - return node - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - - stack: List[Iterator[Tuple[bool, Tree]]] = [] - pop = stack.pop - push = stack.append - new_line = Segment.line() - - get_style = console.get_style - null_style = Style.null() - guide_style = get_style(self.guide_style, default="") or null_style - SPACE, CONTINUE, FORK, END = range(4) - - ASCII_GUIDES = (" ", "| ", "+-- ", "`-- ") - TREE_GUIDES = [ - (" ", "│ ", "├── ", "└── "), - (" ", "┃ ", "┣━━ ", "┗━━ "), - (" ", "║ ", "╠══ ", "╚══ "), - ] - _Segment = Segment - - def make_guide(index: int, style: Style) -> Segment: - """Make a Segment for a level of the guide lines.""" - if options.ascii_only: - line = ASCII_GUIDES[index] - else: - guide = 1 if style.bold else (2 if style.underline2 else 0) - line = TREE_GUIDES[0 if options.legacy_windows else guide][index] - return _Segment(line, style) - - levels: List[Segment] = [make_guide(CONTINUE, guide_style)] - push(iter(loop_last([self]))) - - guide_style_stack = StyleStack(get_style(self.guide_style)) - style_stack = StyleStack(get_style(self.style)) - remove_guide_styles = Style(bold=False, underline2=False) - - depth = 0 - - while stack: - stack_node = pop() - try: - last, node = next(stack_node) - except StopIteration: - levels.pop() - if levels: - guide_style = levels[-1].style or null_style - levels[-1] = make_guide(FORK, guide_style) - guide_style_stack.pop() - style_stack.pop() - continue - push(stack_node) - if last: - levels[-1] = make_guide(END, levels[-1].style or null_style) - - guide_style = guide_style_stack.current + get_style(node.guide_style) - style = style_stack.current + get_style(node.style) - prefix = levels[(2 if self.hide_root else 1) :] - renderable_lines = console.render_lines( - Styled(node.label, style), - options.update( - width=options.max_width - - sum(level.cell_length for level in prefix), - highlight=self.highlight, - height=None, - ), - pad=options.justify is not None, - ) - - if not (depth == 0 and self.hide_root): - for first, line in loop_first(renderable_lines): - if prefix: - yield from _Segment.apply_style( - prefix, - 
style.background_style, - post_style=remove_guide_styles, - ) - yield from line - yield new_line - if first and prefix: - prefix[-1] = make_guide( - SPACE if last else CONTINUE, prefix[-1].style or null_style - ) - - if node.expanded and node.children: - levels[-1] = make_guide( - SPACE if last else CONTINUE, levels[-1].style or null_style - ) - levels.append( - make_guide(END if len(node.children) == 1 else FORK, guide_style) - ) - style_stack.push(get_style(node.style)) - guide_style_stack.push(get_style(node.guide_style)) - push(iter(loop_last(node.children))) - depth += 1 - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - stack: List[Iterator[Tree]] = [iter([self])] - pop = stack.pop - push = stack.append - minimum = 0 - maximum = 0 - measure = Measurement.get - level = 0 - while stack: - iter_tree = pop() - try: - tree = next(iter_tree) - except StopIteration: - level -= 1 - continue - push(iter_tree) - min_measure, max_measure = measure(console, options, tree.label) - indent = level * 4 - minimum = max(min_measure + indent, minimum) - maximum = max(max_measure + indent, maximum) - if tree.expanded and tree.children: - push(iter(tree.children)) - level += 1 - return Measurement(minimum, maximum) - - -if __name__ == "__main__": # pragma: no cover - - from pip._vendor.rich.console import Group - from pip._vendor.rich.markdown import Markdown - from pip._vendor.rich.panel import Panel - from pip._vendor.rich.syntax import Syntax - from pip._vendor.rich.table import Table - - table = Table(row_styles=["", "dim"]) - - table.add_column("Released", style="cyan", no_wrap=True) - table.add_column("Title", style="magenta") - table.add_column("Box Office", justify="right", style="green") - - table.add_row("Dec 20, 2019", "Star Wars: The Rise of Skywalker", "$952,110,690") - table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347") - table.add_row("Dec 15, 2017", "Star Wars Ep. V111: The Last Jedi", "$1,332,539,889") - table.add_row("Dec 16, 2016", "Rogue One: A Star Wars Story", "$1,332,439,889") - - code = """\ -class Segment(NamedTuple): - text: str = "" - style: Optional[Style] = None - is_control: bool = False -""" - syntax = Syntax(code, "python", theme="monokai", line_numbers=True) - - markdown = Markdown( - """\ -### example.md -> Hello, World! 
-> -> Markdown _all_ the things -""" - ) - - root = Tree("🌲 [b green]Rich Tree", highlight=True, hide_root=True) - - node = root.add(":file_folder: Renderables", guide_style="red") - simple_node = node.add(":file_folder: [bold yellow]Atomic", guide_style="uu green") - simple_node.add(Group("📄 Syntax", syntax)) - simple_node.add(Group("📄 Markdown", Panel(markdown, border_style="green"))) - - containers_node = node.add( - ":file_folder: [bold magenta]Containers", guide_style="bold magenta" - ) - containers_node.expanded = True - panel = Panel.fit("Just a panel", border_style="red") - containers_node.add(Group("📄 Panels", panel)) - - containers_node.add(Group("📄 [b magenta]Table", table)) - - console = Console() - - console.print(root) diff --git a/spaces/platzi/platzi-curso-streamlit-butterfly-gan/README.md b/spaces/platzi/platzi-curso-streamlit-butterfly-gan/README.md deleted file mode 100644 index a9d0ec5cc22f48dcb1b4da00095a314cf3eeccbd..0000000000000000000000000000000000000000 --- a/spaces/platzi/platzi-curso-streamlit-butterfly-gan/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Butterfly GAN -emoji: 🦋 -colorFrom: blue -colorTo: yellow -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ppsingh/cpu-demo/utils/preprocessing.py b/spaces/ppsingh/cpu-demo/utils/preprocessing.py deleted file mode 100644 index ee905e76551279b48bbd94de9cb6334f4aaeb29c..0000000000000000000000000000000000000000 --- a/spaces/ppsingh/cpu-demo/utils/preprocessing.py +++ /dev/null @@ -1,275 +0,0 @@ -from haystack.nodes.base import BaseComponent -from haystack.schema import Document -from haystack.nodes import PDFToTextOCRConverter, PDFToTextConverter -from haystack.nodes import TextConverter, DocxToTextConverter, PreProcessor -from typing import Callable, Dict, List, Optional, Text, Tuple, Union -from typing_extensions import Literal -import pandas as pd -import logging -import re -import string -from haystack.pipelines import Pipeline - -def useOCR(file_path: str)-> Text: - """ - Converts image pdfs into text, Using the Farm-haystack[OCR] - - Params - ---------- - file_path: file_path of uploade file, returned by add_upload function in - uploadAndExample.py - - Returns the text file as string. - """ - - - converter = PDFToTextOCRConverter(remove_numeric_tables=True, - valid_languages=["eng"]) - docs = converter.convert(file_path=file_path, meta=None) - return docs[0].content - - - - -class FileConverter(BaseComponent): - """ - Wrapper class to convert uploaded document into text by calling appropriate - Converter class, will use internally haystack PDFToTextOCR in case of image - pdf. Cannot use the FileClassifier from haystack as its doesnt has any - label/output class for image. - - 1. https://haystack.deepset.ai/pipeline_nodes/custom-nodes - 2. https://docs.haystack.deepset.ai/docs/file_converters - 3. https://github.com/deepset-ai/haystack/tree/main/haystack/nodes/file_converter - 4. https://docs.haystack.deepset.ai/reference/file-converters-api - - - """ - - outgoing_edges = 1 - - def run(self, file_name: str , file_path: str, encoding: Optional[str]=None, - id_hash_keys: Optional[List[str]] = None, - ) -> Tuple[dict,str]: - """ this is required method to invoke the component in - the pipeline implementation. 
- - Params - ---------- - file_name: name of file - file_path: file_path of uploade file, returned by add_upload function in - uploadAndExample.py - - See the links provided in Class docstring/description to see other params - - Return - --------- - output: dictionary, with key as identifier and value could be anything - we need to return. In this case its the List of Hasyatck Document - - output_1: As there is only one outgoing edge, we pass 'output_1' string - """ - try: - if file_name.endswith('.pdf'): - converter = PDFToTextConverter(remove_numeric_tables=True) - if file_name.endswith('.txt'): - converter = TextConverter(remove_numeric_tables=True) - if file_name.endswith('.docx'): - converter = DocxToTextConverter() - except Exception as e: - logging.error(e) - return - - - - documents = [] - - -# encoding is empty, probably should be utf-8 - document = converter.convert( - file_path=file_path, meta=None, - encoding=encoding, id_hash_keys=id_hash_keys - )[0] - - text = document.content - - # in case of scanned/images only PDF the content might contain only - # the page separator (\f or \x0c). We check if is so and use - # use the OCR to get the text. - filtered = re.sub(r'\x0c', '', text) - - if filtered == "": - logging.info("Using OCR") - text = useOCR(file_path) - - documents.append(Document(content=text, - meta={"name": file_name}, - id_hash_keys=id_hash_keys)) - - logging.info('file conversion succesful') - output = {'documents': documents} - return output, 'output_1' - - def run_batch(): - """ - we dont have requirement to process the multiple files in one go - therefore nothing here, however to use the custom node we need to have - this method for the class. - """ - - return - - -def basic(s:str, remove_punc:bool = False): - - """ - Performs basic cleaning of text. - - Params - ---------- - s: string to be processed - removePunc: to remove all Punctuation including ',' and '.' or not - - Returns: processed string: see comments in the source code for more info - """ - - # Remove URLs - s = re.sub(r'^https?:\/\/.*[\r\n]*', ' ', s, flags=re.MULTILINE) - s = re.sub(r"http\S+", " ", s) - - # Remove new line characters - s = re.sub('\n', ' ', s) - - # Remove punctuations - if remove_punc == True: - translator = str.maketrans(' ', ' ', string.punctuation) - s = s.translate(translator) - # Remove distracting single quotes and dotted pattern - s = re.sub("\'", " ", s) - s = s.replace("..","") - - return s.strip() - -def paraLengthCheck(paraList, max_len = 512): - new_para_list = [] - for passage in paraList: - if len(passage.split()) > max_len: - iterations = int(len(passage.split())/max_len) - # # st.write("Splitting") - for i in range(iterations): - temp = " ".join(passage.split()[max_len*i:max_len*(i+1)]) - new_para_list.append(temp) - temp = " ".join(passage.split()[max_len*(i+1):]) - new_para_list.append(temp) - else: - new_para_list.append(passage) - - return new_para_list - -class UdfPreProcessor(BaseComponent): - """ - class to preprocess the document returned by FileConverter. It will check - for splitting strategy and splits the document by word or sentences and then - synthetically create the paragraphs. - - 1. https://docs.haystack.deepset.ai/docs/preprocessor - 2. https://docs.haystack.deepset.ai/reference/preprocessor-api - 3. 
https://github.com/deepset-ai/haystack/tree/main/haystack/nodes/preprocessor - - """ - outgoing_edges = 1 - - def run(self, documents:List[Document], remove_punc:bool=False, - split_by: Literal["sentence", "word"] = 'sentence', - split_length:int = 2, split_respect_sentence_boundary:bool = False, - split_overlap:int = 0): - - """ this is required method to invoke the component in - the pipeline implementation. - - Params - ---------- - documents: documents from the output dictionary returned by Fileconverter - remove_punc: to remove all Punctuation including ',' and '.' or not - split_by: document splitting strategy either as word or sentence - split_length: when synthetically creating the paragrpahs from document, - it defines the length of paragraph. - split_respect_sentence_boundary: Used when using 'word' strategy for - splititng of text. - split_overlap: Number of words or sentences that overlap when creating - the paragraphs. This is done as one sentence or 'some words' make sense - when read in together with others. Therefore the overlap is used. - - Return - --------- - output: dictionary, with key as identifier and value could be anything - we need to return. In this case the output will contain 4 objects - the paragraphs text list as List, Haystack document, Dataframe and - one raw text file. - - output_1: As there is only one outgoing edge, we pass 'output_1' string - - """ - - if split_by == 'sentence': - split_respect_sentence_boundary = False - - else: - split_respect_sentence_boundary = split_respect_sentence_boundary - - preprocessor = PreProcessor( - clean_empty_lines=True, - clean_whitespace=True, - clean_header_footer=True, - split_by=split_by, - split_length=split_length, - split_respect_sentence_boundary= split_respect_sentence_boundary, - split_overlap=split_overlap, - - # will add page number only in case of PDF not for text/docx file. - add_page_number=True - ) - - for i in documents: - # # basic cleaning before passing it to preprocessor. - # i = basic(i) - docs_processed = preprocessor.process([i]) - for item in docs_processed: - item.content = basic(item.content, remove_punc= remove_punc) - - df = pd.DataFrame(docs_processed) - all_text = " ".join(df.content.to_list()) - para_list = df.content.to_list() - logging.info('document split into {} paragraphs'.format(len(para_list))) - output = {'documents': docs_processed, - 'dataframe': df, - 'text': all_text, - 'paraList': para_list - } - return output, "output_1" - def run_batch(): - """ - we dont have requirement to process the multiple files in one go - therefore nothing here, however to use the custom node we need to have - this method for the class. - """ - return - -def processingpipeline(): - """ - Returns the preprocessing pipeline. 
Will use FileConverter and UdfPreProcesor - from utils.preprocessing - - """ - - preprocessing_pipeline = Pipeline() - file_converter = FileConverter() - custom_preprocessor = UdfPreProcessor() - - preprocessing_pipeline.add_node(component=file_converter, - name="FileConverter", inputs=["File"]) - preprocessing_pipeline.add_node(component = custom_preprocessor, - name ='UdfPreProcessor', inputs=["FileConverter"]) - - return preprocessing_pipeline - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/cu2qu/ufo.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/cu2qu/ufo.py deleted file mode 100644 index 10367cfecf8384e32eace3b9d0e01ab6c588c324..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/cu2qu/ufo.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -"""Converts cubic bezier curves to quadratic splines. - -Conversion is performed such that the quadratic splines keep the same end-curve -tangents as the original cubics. The approach is iterative, increasing the -number of segments for a spline until the error gets below a bound. - -Respective curves from multiple fonts will be converted at once to ensure that -the resulting splines are interpolation-compatible. -""" - -import logging -from fontTools.pens.basePen import AbstractPen -from fontTools.pens.pointPen import PointToSegmentPen -from fontTools.pens.reverseContourPen import ReverseContourPen - -from . import curves_to_quadratic -from .errors import ( - UnequalZipLengthsError, - IncompatibleSegmentNumberError, - IncompatibleSegmentTypesError, - IncompatibleGlyphsError, - IncompatibleFontsError, -) - - -__all__ = ["fonts_to_quadratic", "font_to_quadratic"] - -# The default approximation error below is a relative value (1/1000 of the EM square). -# Later on, we convert it to absolute font units by multiplying it by a font's UPEM -# (see fonts_to_quadratic). -DEFAULT_MAX_ERR = 0.001 -CURVE_TYPE_LIB_KEY = "com.github.googlei18n.cu2qu.curve_type" - -logger = logging.getLogger(__name__) - - -_zip = zip - - -def zip(*args): - """Ensure each argument to zip has the same length. Also make sure a list is - returned for python 2/3 compatibility. - """ - - if len(set(len(a) for a in args)) != 1: - raise UnequalZipLengthsError(*args) - return list(_zip(*args)) - - -class GetSegmentsPen(AbstractPen): - """Pen to collect segments into lists of points for conversion. - - Curves always include their initial on-curve point, so some points are - duplicated between segments. 
- """ - - def __init__(self): - self._last_pt = None - self.segments = [] - - def _add_segment(self, tag, *args): - if tag in ["move", "line", "qcurve", "curve"]: - self._last_pt = args[-1] - self.segments.append((tag, args)) - - def moveTo(self, pt): - self._add_segment("move", pt) - - def lineTo(self, pt): - self._add_segment("line", pt) - - def qCurveTo(self, *points): - self._add_segment("qcurve", self._last_pt, *points) - - def curveTo(self, *points): - self._add_segment("curve", self._last_pt, *points) - - def closePath(self): - self._add_segment("close") - - def endPath(self): - self._add_segment("end") - - def addComponent(self, glyphName, transformation): - pass - - -def _get_segments(glyph): - """Get a glyph's segments as extracted by GetSegmentsPen.""" - - pen = GetSegmentsPen() - # glyph.draw(pen) - # We can't simply draw the glyph with the pen, but we must initialize the - # PointToSegmentPen explicitly with outputImpliedClosingLine=True. - # By default PointToSegmentPen does not outputImpliedClosingLine -- unless - # last and first point on closed contour are duplicated. Because we are - # converting multiple glyphs at the same time, we want to make sure - # this function returns the same number of segments, whether or not - # the last and first point overlap. - # https://github.com/googlefonts/fontmake/issues/572 - # https://github.com/fonttools/fonttools/pull/1720 - pointPen = PointToSegmentPen(pen, outputImpliedClosingLine=True) - glyph.drawPoints(pointPen) - return pen.segments - - -def _set_segments(glyph, segments, reverse_direction): - """Draw segments as extracted by GetSegmentsPen back to a glyph.""" - - glyph.clearContours() - pen = glyph.getPen() - if reverse_direction: - pen = ReverseContourPen(pen) - for tag, args in segments: - if tag == "move": - pen.moveTo(*args) - elif tag == "line": - pen.lineTo(*args) - elif tag == "curve": - pen.curveTo(*args[1:]) - elif tag == "qcurve": - pen.qCurveTo(*args[1:]) - elif tag == "close": - pen.closePath() - elif tag == "end": - pen.endPath() - else: - raise AssertionError('Unhandled segment type "%s"' % tag) - - -def _segments_to_quadratic(segments, max_err, stats, all_quadratic=True): - """Return quadratic approximations of cubic segments.""" - - assert all(s[0] == "curve" for s in segments), "Non-cubic given to convert" - - new_points = curves_to_quadratic([s[1] for s in segments], max_err, all_quadratic) - n = len(new_points[0]) - assert all(len(s) == n for s in new_points[1:]), "Converted incompatibly" - - spline_length = str(n - 2) - stats[spline_length] = stats.get(spline_length, 0) + 1 - - if all_quadratic or n == 3: - return [("qcurve", p) for p in new_points] - else: - return [("curve", p) for p in new_points] - - -def _glyphs_to_quadratic(glyphs, max_err, reverse_direction, stats, all_quadratic=True): - """Do the actual conversion of a set of compatible glyphs, after arguments - have been set up. - - Return True if the glyphs were modified, else return False. 
- """ - - try: - segments_by_location = zip(*[_get_segments(g) for g in glyphs]) - except UnequalZipLengthsError: - raise IncompatibleSegmentNumberError(glyphs) - if not any(segments_by_location): - return False - - # always modify input glyphs if reverse_direction is True - glyphs_modified = reverse_direction - - new_segments_by_location = [] - incompatible = {} - for i, segments in enumerate(segments_by_location): - tag = segments[0][0] - if not all(s[0] == tag for s in segments[1:]): - incompatible[i] = [s[0] for s in segments] - elif tag == "curve": - new_segments = _segments_to_quadratic( - segments, max_err, stats, all_quadratic - ) - if all_quadratic or new_segments != segments: - glyphs_modified = True - segments = new_segments - new_segments_by_location.append(segments) - - if glyphs_modified: - new_segments_by_glyph = zip(*new_segments_by_location) - for glyph, new_segments in zip(glyphs, new_segments_by_glyph): - _set_segments(glyph, new_segments, reverse_direction) - - if incompatible: - raise IncompatibleSegmentTypesError(glyphs, segments=incompatible) - return glyphs_modified - - -def glyphs_to_quadratic( - glyphs, max_err=None, reverse_direction=False, stats=None, all_quadratic=True -): - """Convert the curves of a set of compatible of glyphs to quadratic. - - All curves will be converted to quadratic at once, ensuring interpolation - compatibility. If this is not required, calling glyphs_to_quadratic with one - glyph at a time may yield slightly more optimized results. - - Return True if glyphs were modified, else return False. - - Raises IncompatibleGlyphsError if glyphs have non-interpolatable outlines. - """ - if stats is None: - stats = {} - - if not max_err: - # assume 1000 is the default UPEM - max_err = DEFAULT_MAX_ERR * 1000 - - if isinstance(max_err, (list, tuple)): - max_errors = max_err - else: - max_errors = [max_err] * len(glyphs) - assert len(max_errors) == len(glyphs) - - return _glyphs_to_quadratic( - glyphs, max_errors, reverse_direction, stats, all_quadratic - ) - - -def fonts_to_quadratic( - fonts, - max_err_em=None, - max_err=None, - reverse_direction=False, - stats=None, - dump_stats=False, - remember_curve_type=True, - all_quadratic=True, -): - """Convert the curves of a collection of fonts to quadratic. - - All curves will be converted to quadratic at once, ensuring interpolation - compatibility. If this is not required, calling fonts_to_quadratic with one - font at a time may yield slightly more optimized results. - - Return True if fonts were modified, else return False. - - By default, cu2qu stores the curve type in the fonts' lib, under a private - key "com.github.googlei18n.cu2qu.curve_type", and will not try to convert - them again if the curve type is already set to "quadratic". - Setting 'remember_curve_type' to False disables this optimization. - - Raises IncompatibleFontsError if same-named glyphs from different fonts - have non-interpolatable outlines. 
- """ - - if remember_curve_type: - curve_types = {f.lib.get(CURVE_TYPE_LIB_KEY, "cubic") for f in fonts} - if len(curve_types) == 1: - curve_type = next(iter(curve_types)) - if curve_type in ("quadratic", "mixed"): - logger.info("Curves already converted to quadratic") - return False - elif curve_type == "cubic": - pass # keep converting - else: - raise NotImplementedError(curve_type) - elif len(curve_types) > 1: - # going to crash later if they do differ - logger.warning("fonts may contain different curve types") - - if stats is None: - stats = {} - - if max_err_em and max_err: - raise TypeError("Only one of max_err and max_err_em can be specified.") - if not (max_err_em or max_err): - max_err_em = DEFAULT_MAX_ERR - - if isinstance(max_err, (list, tuple)): - assert len(max_err) == len(fonts) - max_errors = max_err - elif max_err: - max_errors = [max_err] * len(fonts) - - if isinstance(max_err_em, (list, tuple)): - assert len(fonts) == len(max_err_em) - max_errors = [f.info.unitsPerEm * e for f, e in zip(fonts, max_err_em)] - elif max_err_em: - max_errors = [f.info.unitsPerEm * max_err_em for f in fonts] - - modified = False - glyph_errors = {} - for name in set().union(*(f.keys() for f in fonts)): - glyphs = [] - cur_max_errors = [] - for font, error in zip(fonts, max_errors): - if name in font: - glyphs.append(font[name]) - cur_max_errors.append(error) - try: - modified |= _glyphs_to_quadratic( - glyphs, cur_max_errors, reverse_direction, stats, all_quadratic - ) - except IncompatibleGlyphsError as exc: - logger.error(exc) - glyph_errors[name] = exc - - if glyph_errors: - raise IncompatibleFontsError(glyph_errors) - - if modified and dump_stats: - spline_lengths = sorted(stats.keys()) - logger.info( - "New spline lengths: %s" - % (", ".join("%s: %d" % (l, stats[l]) for l in spline_lengths)) - ) - - if remember_curve_type: - for font in fonts: - curve_type = font.lib.get(CURVE_TYPE_LIB_KEY, "cubic") - new_curve_type = "quadratic" if all_quadratic else "mixed" - if curve_type != new_curve_type: - font.lib[CURVE_TYPE_LIB_KEY] = new_curve_type - modified = True - return modified - - -def glyph_to_quadratic(glyph, **kwargs): - """Convenience wrapper around glyphs_to_quadratic, for just one glyph. - Return True if the glyph was modified, else return False. - """ - - return glyphs_to_quadratic([glyph], **kwargs) - - -def font_to_quadratic(font, **kwargs): - """Convenience wrapper around fonts_to_quadratic, for just one font. - Return True if the font was modified, else return False. - """ - - return fonts_to_quadratic([font], **kwargs) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/L_T_S_H_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/L_T_S_H_.py deleted file mode 100644 index e0ab0d021c47cf79e51cad326806e12ff97c9e00..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/L_T_S_H_.py +++ /dev/null @@ -1,48 +0,0 @@ -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import struct -import array - -# XXX I've lowered the strictness, to make sure Apple's own Chicago -# XXX gets through. They're looking into it, I hope to raise the standards -# XXX back to normal eventually. 
- - -class table_L_T_S_H_(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - version, numGlyphs = struct.unpack(">HH", data[:4]) - data = data[4:] - assert version == 0, "unknown version: %s" % version - assert (len(data) % numGlyphs) < 4, "numGlyphs doesn't match data length" - # ouch: the assertion is not true in Chicago! - # assert numGlyphs == ttFont['maxp'].numGlyphs - yPels = array.array("B") - yPels.frombytes(data) - self.yPels = {} - for i in range(numGlyphs): - self.yPels[ttFont.getGlyphName(i)] = yPels[i] - - def compile(self, ttFont): - version = 0 - names = list(self.yPels.keys()) - numGlyphs = len(names) - yPels = [0] * numGlyphs - # ouch: the assertion is not true in Chicago! - # assert len(self.yPels) == ttFont['maxp'].numGlyphs == numGlyphs - for name in names: - yPels[ttFont.getGlyphID(name)] = self.yPels[name] - yPels = array.array("B", yPels) - return struct.pack(">HH", version, numGlyphs) + yPels.tobytes() - - def toXML(self, writer, ttFont): - names = sorted(self.yPels.keys()) - for name in names: - writer.simpletag("yPel", name=name, value=self.yPels[name]) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "yPels"): - self.yPels = {} - if name != "yPel": - return # ignore unknown tags - self.yPels[attrs["name"]] = safeEval(attrs["value"]) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_abnf.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_abnf.py deleted file mode 100644 index 933587fba22290d7eb7df4c88e12f1e61702b8ce..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_abnf.py +++ /dev/null @@ -1,132 +0,0 @@ -# We use native strings for all the re patterns, to take advantage of string -# formatting, and then convert to bytestrings when compiling the final re -# objects. - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#whitespace -# OWS = *( SP / HTAB ) -# ; optional whitespace -OWS = r"[ \t]*" - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#rule.token.separators -# token = 1*tchar -# -# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" -# / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" -# / DIGIT / ALPHA -# ; any VCHAR, except delimiters -token = r"[-!#$%&'*+.^_`|~0-9a-zA-Z]+" - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#header.fields -# field-name = token -field_name = token - -# The standard says: -# -# field-value = *( field-content / obs-fold ) -# field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ] -# field-vchar = VCHAR / obs-text -# obs-fold = CRLF 1*( SP / HTAB ) -# ; obsolete line folding -# ; see Section 3.2.4 -# -# https://tools.ietf.org/html/rfc5234#appendix-B.1 -# -# VCHAR = %x21-7E -# ; visible (printing) characters -# -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#rule.quoted-string -# obs-text = %x80-FF -# -# However, the standard definition of field-content is WRONG! It disallows -# fields containing a single visible character surrounded by whitespace, -# e.g. "foo a bar". -# -# See: https://www.rfc-editor.org/errata_search.php?rfc=7230&eid=4189 -# -# So our definition of field_content attempts to fix it up... 
-# -# Also, we allow lots of control characters, because apparently people assume -# that they're legal in practice (e.g., google analytics makes cookies with -# \x01 in them!): -# https://github.com/python-hyper/h11/issues/57 -# We still don't allow NUL or whitespace, because those are often treated as -# meta-characters and letting them through can lead to nasty issues like SSRF. -vchar = r"[\x21-\x7e]" -vchar_or_obs_text = r"[^\x00\s]" -field_vchar = vchar_or_obs_text -field_content = r"{field_vchar}+(?:[ \t]+{field_vchar}+)*".format(**globals()) - -# We handle obs-fold at a different level, and our fixed-up field_content -# already grows to swallow the whole value, so ? instead of * -field_value = r"({field_content})?".format(**globals()) - -# header-field = field-name ":" OWS field-value OWS -header_field = ( - r"(?P{field_name})" - r":" - r"{OWS}" - r"(?P{field_value})" - r"{OWS}".format(**globals()) -) - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#request.line -# -# request-line = method SP request-target SP HTTP-version CRLF -# method = token -# HTTP-version = HTTP-name "/" DIGIT "." DIGIT -# HTTP-name = %x48.54.54.50 ; "HTTP", case-sensitive -# -# request-target is complicated (see RFC 7230 sec 5.3) -- could be path, full -# URL, host+port (for connect), or even "*", but in any case we are guaranteed -# that it contists of the visible printing characters. -method = token -request_target = r"{vchar}+".format(**globals()) -http_version = r"HTTP/(?P[0-9]\.[0-9])" -request_line = ( - r"(?P{method})" - r" " - r"(?P{request_target})" - r" " - r"{http_version}".format(**globals()) -) - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#status.line -# -# status-line = HTTP-version SP status-code SP reason-phrase CRLF -# status-code = 3DIGIT -# reason-phrase = *( HTAB / SP / VCHAR / obs-text ) -status_code = r"[0-9]{3}" -reason_phrase = r"([ \t]|{vchar_or_obs_text})*".format(**globals()) -status_line = ( - r"{http_version}" - r" " - r"(?P{status_code})" - # However, there are apparently a few too many servers out there that just - # leave out the reason phrase: - # https://github.com/scrapy/scrapy/issues/345#issuecomment-281756036 - # https://github.com/seanmonstar/httparse/issues/29 - # so make it optional. ?: is a non-capturing group. - r"(?: (?P{reason_phrase}))?".format(**globals()) -) - -HEXDIG = r"[0-9A-Fa-f]" -# Actually -# -# chunk-size = 1*HEXDIG -# -# but we impose an upper-limit to avoid ridiculosity. len(str(2**64)) == 20 -chunk_size = r"({HEXDIG}){{1,20}}".format(**globals()) -# Actually -# -# chunk-ext = *( ";" chunk-ext-name [ "=" chunk-ext-val ] ) -# -# but we aren't parsing the things so we don't really care. -chunk_ext = r";.*" -chunk_header = ( - r"(?P{chunk_size})" - r"(?P{chunk_ext})?" - r"{OWS}\r\n".format( - **globals() - ) # Even though the specification does not allow for extra whitespaces, - # we are lenient with trailing whitespaces because some servers on the wild use it. 
-) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_trace.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_trace.py deleted file mode 100644 index b122a53e88f17e1e450f63b05ede3e28e8a7992a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_trace.py +++ /dev/null @@ -1,105 +0,0 @@ -import inspect -import logging -from types import TracebackType -from typing import Any, Dict, Optional, Type - -from ._models import Request - - -class Trace: - def __init__( - self, - name: str, - logger: logging.Logger, - request: Optional[Request] = None, - kwargs: Optional[Dict[str, Any]] = None, - ) -> None: - self.name = name - self.logger = logger - self.trace_extension = ( - None if request is None else request.extensions.get("trace") - ) - self.debug = self.logger.isEnabledFor(logging.DEBUG) - self.kwargs = kwargs or {} - self.return_value: Any = None - self.should_trace = self.debug or self.trace_extension is not None - self.prefix = self.logger.name.split(".")[-1] - - def trace(self, name: str, info: Dict[str, Any]) -> None: - if self.trace_extension is not None: - prefix_and_name = f"{self.prefix}.{name}" - ret = self.trace_extension(prefix_and_name, info) - if inspect.iscoroutine(ret): # pragma: no cover - raise TypeError( - "If you are using a synchronous interface, " - "the callback of the `trace` extension should " - "be a normal function instead of an asynchronous function." - ) - - if self.debug: - if not info or "return_value" in info and info["return_value"] is None: - message = name - else: - args = " ".join([f"{key}={value!r}" for key, value in info.items()]) - message = f"{name} {args}" - self.logger.debug(message) - - def __enter__(self) -> "Trace": - if self.should_trace: - info = self.kwargs - self.trace(f"{self.name}.started", info) - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - if self.should_trace: - if exc_value is None: - info = {"return_value": self.return_value} - self.trace(f"{self.name}.complete", info) - else: - info = {"exception": exc_value} - self.trace(f"{self.name}.failed", info) - - async def atrace(self, name: str, info: Dict[str, Any]) -> None: - if self.trace_extension is not None: - prefix_and_name = f"{self.prefix}.{name}" - coro = self.trace_extension(prefix_and_name, info) - if not inspect.iscoroutine(coro): # pragma: no cover - raise TypeError( - "If you're using an asynchronous interface, " - "the callback of the `trace` extension should " - "be an asynchronous function rather than a normal function." 
- ) - await coro - - if self.debug: - if not info or "return_value" in info and info["return_value"] is None: - message = name - else: - args = " ".join([f"{key}={value!r}" for key, value in info.items()]) - message = f"{name} {args}" - self.logger.debug(message) - - async def __aenter__(self) -> "Trace": - if self.should_trace: - info = self.kwargs - await self.atrace(f"{self.name}.started", info) - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - if self.should_trace: - if exc_value is None: - info = {"return_value": self.return_value} - await self.atrace(f"{self.name}.complete", info) - else: - info = {"exception": exc_value} - await self.atrace(f"{self.name}.failed", info) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/_compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/_compat.py deleted file mode 100644 index 974d431bd9828ef226e5c965dee56edd47d4f0ed..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/_compat.py +++ /dev/null @@ -1,11 +0,0 @@ -from __future__ import annotations - -from collections.abc import Mapping -import sys -from typing import Any - -DATACLASS_KWARGS: Mapping[str, Any] -if sys.version_info >= (3, 10): - DATACLASS_KWARGS = {"slots": True} -else: - DATACLASS_KWARGS = {} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_svg.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_svg.py deleted file mode 100644 index 01edbf870fb4169db2b7206a7d38a33c86d1f72b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_svg.py +++ /dev/null @@ -1,643 +0,0 @@ -import datetime -from io import BytesIO -from pathlib import Path -import xml.etree.ElementTree -import xml.parsers.expat - -import pytest - -import numpy as np - -import matplotlib as mpl -from matplotlib.figure import Figure -from matplotlib.text import Text -import matplotlib.pyplot as plt -from matplotlib.testing.decorators import check_figures_equal, image_comparison -from matplotlib.testing._markers import needs_usetex -from matplotlib import font_manager as fm -from matplotlib.offsetbox import (OffsetImage, AnnotationBbox) - - -def test_visibility(): - fig, ax = plt.subplots() - - x = np.linspace(0, 4 * np.pi, 50) - y = np.sin(x) - yerr = np.ones_like(y) - - a, b, c = ax.errorbar(x, y, yerr=yerr, fmt='ko') - for artist in b: - artist.set_visible(False) - - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue() - - parser = xml.parsers.expat.ParserCreate() - parser.Parse(buf) # this will raise ExpatError if the svg is invalid - - -@image_comparison(['fill_black_with_alpha.svg'], remove_text=True) -def test_fill_black_with_alpha(): - fig, ax = plt.subplots() - ax.scatter(x=[0, 0.1, 1], y=[0, 0, 0], c='k', alpha=0.1, s=10000) - - -@image_comparison(['noscale'], remove_text=True) -def test_noscale(): - X, Y = np.meshgrid(np.arange(-5, 5, 1), np.arange(-5, 5, 1)) - Z = np.sin(Y ** 2) - - fig, ax = plt.subplots() - ax.imshow(Z, cmap='gray', interpolation='none') - - -def test_text_urls(): - fig = plt.figure() - - test_url = "http://test_text_urls.matplotlib.org" - fig.suptitle("test_text_urls", url=test_url) - - with BytesIO() 
as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue().decode() - - expected = f'' - assert expected in buf - - -@image_comparison(['bold_font_output.svg']) -def test_bold_font_output(): - fig, ax = plt.subplots() - ax.plot(np.arange(10), np.arange(10)) - ax.set_xlabel('nonbold-xlabel') - ax.set_ylabel('bold-ylabel', fontweight='bold') - ax.set_title('bold-title', fontweight='bold') - - -@image_comparison(['bold_font_output_with_none_fonttype.svg']) -def test_bold_font_output_with_none_fonttype(): - plt.rcParams['svg.fonttype'] = 'none' - fig, ax = plt.subplots() - ax.plot(np.arange(10), np.arange(10)) - ax.set_xlabel('nonbold-xlabel') - ax.set_ylabel('bold-ylabel', fontweight='bold') - ax.set_title('bold-title', fontweight='bold') - - -@check_figures_equal(tol=20) -def test_rasterized(fig_test, fig_ref): - t = np.arange(0, 100) * (2.3) - x = np.cos(t) - y = np.sin(t) - - ax_ref = fig_ref.subplots() - ax_ref.plot(x, y, "-", c="r", lw=10) - ax_ref.plot(x+1, y, "-", c="b", lw=10) - - ax_test = fig_test.subplots() - ax_test.plot(x, y, "-", c="r", lw=10, rasterized=True) - ax_test.plot(x+1, y, "-", c="b", lw=10, rasterized=True) - - -@check_figures_equal() -def test_rasterized_ordering(fig_test, fig_ref): - t = np.arange(0, 100) * (2.3) - x = np.cos(t) - y = np.sin(t) - - ax_ref = fig_ref.subplots() - ax_ref.set_xlim(0, 3) - ax_ref.set_ylim(-1.1, 1.1) - ax_ref.plot(x, y, "-", c="r", lw=10, rasterized=True) - ax_ref.plot(x+1, y, "-", c="b", lw=10, rasterized=False) - ax_ref.plot(x+2, y, "-", c="g", lw=10, rasterized=True) - ax_ref.plot(x+3, y, "-", c="m", lw=10, rasterized=True) - - ax_test = fig_test.subplots() - ax_test.set_xlim(0, 3) - ax_test.set_ylim(-1.1, 1.1) - ax_test.plot(x, y, "-", c="r", lw=10, rasterized=True, zorder=1.1) - ax_test.plot(x+2, y, "-", c="g", lw=10, rasterized=True, zorder=1.3) - ax_test.plot(x+3, y, "-", c="m", lw=10, rasterized=True, zorder=1.4) - ax_test.plot(x+1, y, "-", c="b", lw=10, rasterized=False, zorder=1.2) - - -@check_figures_equal(tol=5, extensions=['svg', 'pdf']) -def test_prevent_rasterization(fig_test, fig_ref): - loc = [0.05, 0.05] - - ax_ref = fig_ref.subplots() - - ax_ref.plot([loc[0]], [loc[1]], marker="x", c="black", zorder=2) - - b = mpl.offsetbox.TextArea("X") - abox = mpl.offsetbox.AnnotationBbox(b, loc, zorder=2.1) - ax_ref.add_artist(abox) - - ax_test = fig_test.subplots() - ax_test.plot([loc[0]], [loc[1]], marker="x", c="black", zorder=2, - rasterized=True) - - b = mpl.offsetbox.TextArea("X") - abox = mpl.offsetbox.AnnotationBbox(b, loc, zorder=2.1) - ax_test.add_artist(abox) - - -def test_count_bitmaps(): - def count_tag(fig, tag): - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue().decode() - return buf.count(f"<{tag}") - - # No rasterized elements - fig1 = plt.figure() - ax1 = fig1.add_subplot(1, 1, 1) - ax1.set_axis_off() - for n in range(5): - ax1.plot([0, 20], [0, n], "b-", rasterized=False) - assert count_tag(fig1, "image") == 0 - assert count_tag(fig1, "path") == 6 # axis patch plus lines - - # rasterized can be merged - fig2 = plt.figure() - ax2 = fig2.add_subplot(1, 1, 1) - ax2.set_axis_off() - for n in range(5): - ax2.plot([0, 20], [0, n], "b-", rasterized=True) - assert count_tag(fig2, "image") == 1 - assert count_tag(fig2, "path") == 1 # axis patch - - # rasterized can't be merged without affecting draw order - fig3 = plt.figure() - ax3 = fig3.add_subplot(1, 1, 1) - ax3.set_axis_off() - for n in range(5): - ax3.plot([0, 20], [n, 0], "b-", rasterized=False) - ax3.plot([0, 20], [0, n], "b-", 
rasterized=True) - assert count_tag(fig3, "image") == 5 - assert count_tag(fig3, "path") == 6 - - # rasterized whole axes - fig4 = plt.figure() - ax4 = fig4.add_subplot(1, 1, 1) - ax4.set_axis_off() - ax4.set_rasterized(True) - for n in range(5): - ax4.plot([0, 20], [n, 0], "b-", rasterized=False) - ax4.plot([0, 20], [0, n], "b-", rasterized=True) - assert count_tag(fig4, "image") == 1 - assert count_tag(fig4, "path") == 1 - - # rasterized can be merged, but inhibited by suppressComposite - fig5 = plt.figure() - fig5.suppressComposite = True - ax5 = fig5.add_subplot(1, 1, 1) - ax5.set_axis_off() - for n in range(5): - ax5.plot([0, 20], [0, n], "b-", rasterized=True) - assert count_tag(fig5, "image") == 5 - assert count_tag(fig5, "path") == 1 # axis patch - - -# Use Computer Modern Sans Serif, not Helvetica (which has no \textwon). -@mpl.style.context('default') -@needs_usetex -def test_unicode_won(): - fig = Figure() - fig.text(.5, .5, r'\textwon', usetex=True) - - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue() - - tree = xml.etree.ElementTree.fromstring(buf) - ns = 'http://www.w3.org/2000/svg' - won_id = 'SFSS3583-8e' - assert len(tree.findall(f'.//{{{ns}}}path[@d][@id="{won_id}"]')) == 1 - assert f'#{won_id}' in tree.find(f'.//{{{ns}}}use').attrib.values() - - -def test_svgnone_with_data_coordinates(): - plt.rcParams.update({'svg.fonttype': 'none', 'font.stretch': 'condensed'}) - expected = 'Unlikely to appear by chance' - - fig, ax = plt.subplots() - ax.text(np.datetime64('2019-06-30'), 1, expected) - ax.set_xlim(np.datetime64('2019-01-01'), np.datetime64('2019-12-31')) - ax.set_ylim(0, 2) - - with BytesIO() as fd: - fig.savefig(fd, format='svg') - fd.seek(0) - buf = fd.read().decode() - - assert expected in buf and "condensed" in buf - - -def test_gid(): - """Test that object gid appears in output svg.""" - from matplotlib.offsetbox import OffsetBox - from matplotlib.axis import Tick - - fig = plt.figure() - - ax1 = fig.add_subplot(131) - ax1.imshow([[1., 2.], [2., 3.]], aspect="auto") - ax1.scatter([1, 2, 3], [1, 2, 3], label="myscatter") - ax1.plot([2, 3, 1], label="myplot") - ax1.legend() - ax1a = ax1.twinx() - ax1a.bar([1, 2, 3], [1, 2, 3]) - - ax2 = fig.add_subplot(132, projection="polar") - ax2.plot([0, 1.5, 3], [1, 2, 3]) - - ax3 = fig.add_subplot(133, projection="3d") - ax3.plot([1, 2], [1, 2], [1, 2]) - - fig.canvas.draw() - - gdic = {} - for idx, obj in enumerate(fig.findobj(include_self=True)): - if obj.get_visible(): - gid = f"test123{obj.__class__.__name__}_{idx}" - gdic[gid] = obj - obj.set_gid(gid) - - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue().decode() - - def include(gid, obj): - # we need to exclude certain objects which will not appear in the svg - if isinstance(obj, OffsetBox): - return False - if isinstance(obj, Text): - if obj.get_text() == "": - return False - elif obj.axes is None: - return False - if isinstance(obj, plt.Line2D): - xdata, ydata = obj.get_data() - if len(xdata) == len(ydata) == 1: - return False - elif not hasattr(obj, "axes") or obj.axes is None: - return False - if isinstance(obj, Tick): - loc = obj.get_loc() - if loc == 0: - return False - vi = obj.get_view_interval() - if loc < min(vi) or loc > max(vi): - return False - return True - - for gid, obj in gdic.items(): - if include(gid, obj): - assert gid in buf - - -def test_savefig_tight(): - # Check that the draw-disabled renderer correctly disables open/close_group - # as well. 
- plt.savefig(BytesIO(), format="svgz", bbox_inches="tight") - - -def test_url(): - # Test that object url appears in output svg. - - fig, ax = plt.subplots() - - # collections - s = ax.scatter([1, 2, 3], [4, 5, 6]) - s.set_urls(['https://example.com/foo', 'https://example.com/bar', None]) - - # Line2D - p, = plt.plot([1, 3], [6, 5]) - p.set_url('https://example.com/baz') - - b = BytesIO() - fig.savefig(b, format='svg') - b = b.getvalue() - for v in [b'foo', b'bar', b'baz']: - assert b'https://example.com/' + v in b - - -def test_url_tick(monkeypatch): - monkeypatch.setenv('SOURCE_DATE_EPOCH', '19680801') - - fig1, ax = plt.subplots() - ax.scatter([1, 2, 3], [4, 5, 6]) - for i, tick in enumerate(ax.yaxis.get_major_ticks()): - tick.set_url(f'https://example.com/{i}') - - fig2, ax = plt.subplots() - ax.scatter([1, 2, 3], [4, 5, 6]) - for i, tick in enumerate(ax.yaxis.get_major_ticks()): - tick.label1.set_url(f'https://example.com/{i}') - tick.label2.set_url(f'https://example.com/{i}') - - b1 = BytesIO() - fig1.savefig(b1, format='svg') - b1 = b1.getvalue() - - b2 = BytesIO() - fig2.savefig(b2, format='svg') - b2 = b2.getvalue() - - for i in range(len(ax.yaxis.get_major_ticks())): - assert f'https://example.com/{i}'.encode('ascii') in b1 - assert b1 == b2 - - -def test_svg_default_metadata(monkeypatch): - # Values have been predefined for 'Creator', 'Date', 'Format', and 'Type'. - monkeypatch.setenv('SOURCE_DATE_EPOCH', '19680801') - - fig, ax = plt.subplots() - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue().decode() - - # Creator - assert mpl.__version__ in buf - # Date - assert '1970-08-16' in buf - # Format - assert 'image/svg+xml' in buf - # Type - assert 'StillImage' in buf - - # Now make sure all the default metadata can be cleared. - with BytesIO() as fd: - fig.savefig(fd, format='svg', metadata={'Date': None, 'Creator': None, - 'Format': None, 'Type': None}) - buf = fd.getvalue().decode() - - # Creator - assert mpl.__version__ not in buf - # Date - assert '1970-08-16' not in buf - # Format - assert 'image/svg+xml' not in buf - # Type - assert 'StillImage' not in buf - - -def test_svg_clear_default_metadata(monkeypatch): - # Makes sure that setting a default metadata to `None` - # removes the corresponding tag from the metadata. - monkeypatch.setenv('SOURCE_DATE_EPOCH', '19680801') - - metadata_contains = {'creator': mpl.__version__, 'date': '1970-08-16', - 'format': 'image/svg+xml', 'type': 'StillImage'} - - SVGNS = '{http://www.w3.org/2000/svg}' - RDFNS = '{http://www.w3.org/1999/02/22-rdf-syntax-ns#}' - CCNS = '{http://creativecommons.org/ns#}' - DCNS = '{http://purl.org/dc/elements/1.1/}' - - fig, ax = plt.subplots() - for name in metadata_contains: - with BytesIO() as fd: - fig.savefig(fd, format='svg', metadata={name.title(): None}) - buf = fd.getvalue().decode() - - root = xml.etree.ElementTree.fromstring(buf) - work, = root.findall(f'./{SVGNS}metadata/{RDFNS}RDF/{CCNS}Work') - for key in metadata_contains: - data = work.findall(f'./{DCNS}{key}') - if key == name: - # The one we cleared is not there - assert not data - continue - # Everything else should be there - data, = data - xmlstr = xml.etree.ElementTree.tostring(data, encoding="unicode") - assert metadata_contains[key] in xmlstr - - -def test_svg_clear_all_metadata(): - # Makes sure that setting all default metadata to `None` - # removes the metadata tag from the output. 
- - fig, ax = plt.subplots() - with BytesIO() as fd: - fig.savefig(fd, format='svg', metadata={'Date': None, 'Creator': None, - 'Format': None, 'Type': None}) - buf = fd.getvalue().decode() - - SVGNS = '{http://www.w3.org/2000/svg}' - - root = xml.etree.ElementTree.fromstring(buf) - assert not root.findall(f'./{SVGNS}metadata') - - -def test_svg_metadata(): - single_value = ['Coverage', 'Identifier', 'Language', 'Relation', 'Source', - 'Title', 'Type'] - multi_value = ['Contributor', 'Creator', 'Keywords', 'Publisher', 'Rights'] - metadata = { - 'Date': [datetime.date(1968, 8, 1), - datetime.datetime(1968, 8, 2, 1, 2, 3)], - 'Description': 'description\ntext', - **{k: f'{k} foo' for k in single_value}, - **{k: [f'{k} bar', f'{k} baz'] for k in multi_value}, - } - - fig = plt.figure() - with BytesIO() as fd: - fig.savefig(fd, format='svg', metadata=metadata) - buf = fd.getvalue().decode() - - SVGNS = '{http://www.w3.org/2000/svg}' - RDFNS = '{http://www.w3.org/1999/02/22-rdf-syntax-ns#}' - CCNS = '{http://creativecommons.org/ns#}' - DCNS = '{http://purl.org/dc/elements/1.1/}' - - root = xml.etree.ElementTree.fromstring(buf) - rdf, = root.findall(f'./{SVGNS}metadata/{RDFNS}RDF') - - # Check things that are single entries. - titles = [node.text for node in root.findall(f'./{SVGNS}title')] - assert titles == [metadata['Title']] - types = [node.attrib[f'{RDFNS}resource'] - for node in rdf.findall(f'./{CCNS}Work/{DCNS}type')] - assert types == [metadata['Type']] - for k in ['Description', *single_value]: - if k == 'Type': - continue - values = [node.text - for node in rdf.findall(f'./{CCNS}Work/{DCNS}{k.lower()}')] - assert values == [metadata[k]] - - # Check things that are multi-value entries. - for k in multi_value: - if k == 'Keywords': - continue - values = [ - node.text - for node in rdf.findall( - f'./{CCNS}Work/{DCNS}{k.lower()}/{CCNS}Agent/{DCNS}title')] - assert values == metadata[k] - - # Check special things. - dates = [node.text for node in rdf.findall(f'./{CCNS}Work/{DCNS}date')] - assert dates == ['1968-08-01/1968-08-02T01:02:03'] - - values = [node.text for node in - rdf.findall(f'./{CCNS}Work/{DCNS}subject/{RDFNS}Bag/{RDFNS}li')] - assert values == metadata['Keywords'] - - -@image_comparison(["multi_font_aspath.svg"], tol=1.8) -def test_multi_font_type3(): - fp = fm.FontProperties(family=["WenQuanYi Zen Hei"]) - if Path(fm.findfont(fp)).name != "wqy-zenhei.ttc": - pytest.skip("Font may be missing") - - plt.rc('font', family=['DejaVu Sans', 'WenQuanYi Zen Hei'], size=27) - plt.rc('svg', fonttype='path') - - fig = plt.figure() - fig.text(0.15, 0.475, "There are 几个汉字 in between!") - - -@image_comparison(["multi_font_astext.svg"]) -def test_multi_font_type42(): - fp = fm.FontProperties(family=["WenQuanYi Zen Hei"]) - if Path(fm.findfont(fp)).name != "wqy-zenhei.ttc": - pytest.skip("Font may be missing") - - fig = plt.figure() - plt.rc('svg', fonttype='none') - - plt.rc('font', family=['DejaVu Sans', 'WenQuanYi Zen Hei'], size=27) - fig.text(0.15, 0.475, "There are 几个汉字 in between!") - - -@pytest.mark.parametrize('metadata,error,message', [ - ({'Date': 1}, TypeError, "Invalid type for Date metadata. Expected str"), - ({'Date': [1]}, TypeError, - "Invalid type for Date metadata. Expected iterable"), - ({'Keywords': 1}, TypeError, - "Invalid type for Keywords metadata. Expected str"), - ({'Keywords': [1]}, TypeError, - "Invalid type for Keywords metadata. Expected iterable"), - ({'Creator': 1}, TypeError, - "Invalid type for Creator metadata. 
Expected str"), - ({'Creator': [1]}, TypeError, - "Invalid type for Creator metadata. Expected iterable"), - ({'Title': 1}, TypeError, - "Invalid type for Title metadata. Expected str"), - ({'Format': 1}, TypeError, - "Invalid type for Format metadata. Expected str"), - ({'Foo': 'Bar'}, ValueError, "Unknown metadata key"), - ]) -def test_svg_incorrect_metadata(metadata, error, message): - with pytest.raises(error, match=message), BytesIO() as fd: - fig = plt.figure() - fig.savefig(fd, format='svg', metadata=metadata) - - -def test_svg_escape(): - fig = plt.figure() - fig.text(0.5, 0.5, "<\'\"&>", gid="<\'\"&>") - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue().decode() - assert '<'"&>"' in buf - - -@pytest.mark.parametrize("font_str", [ - "'DejaVu Sans', 'WenQuanYi Zen Hei', 'Arial', sans-serif", - "'DejaVu Serif', 'WenQuanYi Zen Hei', 'Times New Roman', serif", - "'Arial', 'WenQuanYi Zen Hei', cursive", - "'Impact', 'WenQuanYi Zen Hei', fantasy", - "'DejaVu Sans Mono', 'WenQuanYi Zen Hei', 'Courier New', monospace", - # These do not work because the logic to get the font metrics will not find - # WenQuanYi as the fallback logic stops with the first fallback font: - # "'DejaVu Sans Mono', 'Courier New', 'WenQuanYi Zen Hei', monospace", - # "'DejaVu Sans', 'Arial', 'WenQuanYi Zen Hei', sans-serif", - # "'DejaVu Serif', 'Times New Roman', 'WenQuanYi Zen Hei', serif", -]) -@pytest.mark.parametrize("include_generic", [True, False]) -def test_svg_font_string(font_str, include_generic): - fp = fm.FontProperties(family=["WenQuanYi Zen Hei"]) - if Path(fm.findfont(fp)).name != "wqy-zenhei.ttc": - pytest.skip("Font may be missing") - - explicit, *rest, generic = map( - lambda x: x.strip("'"), font_str.split(", ") - ) - size = len(generic) - if include_generic: - rest = rest + [generic] - plt.rcParams[f"font.{generic}"] = rest - plt.rcParams["font.size"] = size - plt.rcParams["svg.fonttype"] = "none" - - fig, ax = plt.subplots() - if generic == "sans-serif": - generic_options = ["sans", "sans-serif", "sans serif"] - else: - generic_options = [generic] - - for generic_name in generic_options: - # test that fallback works - ax.text(0.5, 0.5, "There are 几个汉字 in between!", - family=[explicit, generic_name], ha="center") - # test deduplication works - ax.text(0.5, 0.1, "There are 几个汉字 in between!", - family=[explicit, *rest, generic_name], ha="center") - ax.axis("off") - - with BytesIO() as fd: - fig.savefig(fd, format="svg") - buf = fd.getvalue() - - tree = xml.etree.ElementTree.fromstring(buf) - ns = "http://www.w3.org/2000/svg" - text_count = 0 - for text_element in tree.findall(f".//{{{ns}}}text"): - text_count += 1 - font_info = dict( - map(lambda x: x.strip(), _.strip().split(":")) - for _ in dict(text_element.items())["style"].split(";") - )["font"] - - assert font_info == f"{size}px {font_str}" - assert text_count == len(ax.texts) - - -def test_annotationbbox_gid(): - # Test that object gid appears in the AnnotationBbox - # in output svg. 
- fig = plt.figure() - ax = fig.add_subplot() - arr_img = np.ones((32, 32)) - xy = (0.3, 0.55) - - imagebox = OffsetImage(arr_img, zoom=0.1) - imagebox.image.axes = ax - - ab = AnnotationBbox(imagebox, xy, - xybox=(120., -80.), - xycoords='data', - boxcoords="offset points", - pad=0.5, - arrowprops=dict( - arrowstyle="->", - connectionstyle="angle,angleA=0,angleB=90,rad=3") - ) - ab.set_gid("a test for issue 20044") - ax.add_artist(ab) - - with BytesIO() as fd: - fig.savefig(fd, format='svg') - buf = fd.getvalue().decode('utf-8') - - expected = '' - assert expected in buf diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/return_logical/foo77.f b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/return_logical/foo77.f deleted file mode 100644 index ef530145fedf8b5cf3a05bdf0a46a4e22150007b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/return_logical/foo77.f +++ /dev/null @@ -1,56 +0,0 @@ - function t0(value) - logical value - logical t0 - t0 = value - end - function t1(value) - logical*1 value - logical*1 t1 - t1 = value - end - function t2(value) - logical*2 value - logical*2 t2 - t2 = value - end - function t4(value) - logical*4 value - logical*4 t4 - t4 = value - end -c function t8(value) -c logical*8 value -c logical*8 t8 -c t8 = value -c end - - subroutine s0(t0,value) - logical value - logical t0 -cf2py intent(out) t0 - t0 = value - end - subroutine s1(t1,value) - logical*1 value - logical*1 t1 -cf2py intent(out) t1 - t1 = value - end - subroutine s2(t2,value) - logical*2 value - logical*2 t2 -cf2py intent(out) t2 - t2 = value - end - subroutine s4(t4,value) - logical*4 value - logical*4 t4 -cf2py intent(out) t4 - t4 = value - end -c subroutine s8(t8,value) -c logical*8 value -c logical*8 t8 -cf2py intent(out) t8 -c t8 = value -c end diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/period/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/period/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_multi_thread.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_multi_thread.py deleted file mode 100644 index c5b757d619e7ab17a528b1c3f003592db5c053a9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_multi_thread.py +++ /dev/null @@ -1,147 +0,0 @@ -""" -Tests multithreading behaviour for reading and -parsing files for each parser defined in parsers.py -""" -from contextlib import ExitStack -from io import BytesIO -from multiprocessing.pool import ThreadPool - -import numpy as np -import pytest - -import pandas as pd -from pandas import DataFrame -import pandas._testing as tm - -# We'll probably always skip these for pyarrow -# Maybe we'll add our own tests for pyarrow too -pytestmark = [ - pytest.mark.single_cpu, - pytest.mark.slow, - pytest.mark.usefixtures("pyarrow_skip"), -] - - -def test_multi_thread_string_io_read_csv(all_parsers): - # see gh-11786 - parser = all_parsers - max_row_range = 100 - num_files = 10 - - bytes_to_df = ( - "\n".join([f"{i:d},{i:d},{i:d}" for i in range(max_row_range)]).encode() - for _ in 
range(num_files) - ) - - # Read all files in many threads. - with ExitStack() as stack: - files = [stack.enter_context(BytesIO(b)) for b in bytes_to_df] - - pool = stack.enter_context(ThreadPool(8)) - - results = pool.map(parser.read_csv, files) - first_result = results[0] - - for result in results: - tm.assert_frame_equal(first_result, result) - - -def _generate_multi_thread_dataframe(parser, path, num_rows, num_tasks): - """ - Generate a DataFrame via multi-thread. - - Parameters - ---------- - parser : BaseParser - The parser object to use for reading the data. - path : str - The location of the CSV file to read. - num_rows : int - The number of rows to read per task. - num_tasks : int - The number of tasks to use for reading this DataFrame. - - Returns - ------- - df : DataFrame - """ - - def reader(arg): - """ - Create a reader for part of the CSV. - - Parameters - ---------- - arg : tuple - A tuple of the following: - - * start : int - The starting row to start for parsing CSV - * nrows : int - The number of rows to read. - - Returns - ------- - df : DataFrame - """ - start, nrows = arg - - if not start: - return parser.read_csv( - path, index_col=0, header=0, nrows=nrows, parse_dates=["date"] - ) - - return parser.read_csv( - path, - index_col=0, - header=None, - skiprows=int(start) + 1, - nrows=nrows, - parse_dates=[9], - ) - - tasks = [ - (num_rows * i // num_tasks, num_rows // num_tasks) for i in range(num_tasks) - ] - - with ThreadPool(processes=num_tasks) as pool: - results = pool.map(reader, tasks) - - header = results[0].columns - - for r in results[1:]: - r.columns = header - - final_dataframe = pd.concat(results) - return final_dataframe - - -def test_multi_thread_path_multipart_read_csv(all_parsers): - # see gh-11786 - num_tasks = 4 - num_rows = 48 - - parser = all_parsers - file_name = "__thread_pool_reader__.csv" - df = DataFrame( - { - "a": np.random.default_rng(2).random(num_rows), - "b": np.random.default_rng(2).random(num_rows), - "c": np.random.default_rng(2).random(num_rows), - "d": np.random.default_rng(2).random(num_rows), - "e": np.random.default_rng(2).random(num_rows), - "foo": ["foo"] * num_rows, - "bar": ["bar"] * num_rows, - "baz": ["baz"] * num_rows, - "date": pd.date_range("20000101 09:00:00", periods=num_rows, freq="s"), - "int": np.arange(num_rows, dtype="int64"), - } - ) - - with tm.ensure_clean(file_name) as path: - df.to_csv(path) - - final_dataframe = _generate_multi_thread_dataframe( - parser, path, num_rows, num_tasks - ) - tm.assert_frame_equal(df, final_dataframe) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/resample/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/resample/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pep517/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pep517/__init__.py deleted file mode 100644 index 2b6b8856790aab7b0e1f70ad7a75be7f9a21e736..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pep517/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -"""Wrappers to build Python packages using PEP 517 hooks -""" - -__version__ = '0.12.0' - -from .wrappers import * # noqa: F401, F403 diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/packaging/requirements.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/packaging/requirements.py deleted file mode 100644 index 5d50c7d7e20c8b390edc0e6a2c362161641117d4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/packaging/requirements.py +++ /dev/null @@ -1,145 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. -from __future__ import absolute_import, division, print_function - -import string -import re - -from setuptools.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException -from setuptools.extern.pyparsing import ZeroOrMore, Word, Optional, Regex, Combine -from setuptools.extern.pyparsing import Literal as L # noqa -from urllib import parse as urlparse - -from ._typing import TYPE_CHECKING -from .markers import MARKER_EXPR, Marker -from .specifiers import LegacySpecifier, Specifier, SpecifierSet - -if TYPE_CHECKING: # pragma: no cover - from typing import List - - -class InvalidRequirement(ValueError): - """ - An invalid requirement was found, users should refer to PEP 508. - """ - - -ALPHANUM = Word(string.ascii_letters + string.digits) - -LBRACKET = L("[").suppress() -RBRACKET = L("]").suppress() -LPAREN = L("(").suppress() -RPAREN = L(")").suppress() -COMMA = L(",").suppress() -SEMICOLON = L(";").suppress() -AT = L("@").suppress() - -PUNCTUATION = Word("-_.") -IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM) -IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END)) - -NAME = IDENTIFIER("name") -EXTRA = IDENTIFIER - -URI = Regex(r"[^ ]+")("url") -URL = AT + URI - -EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA) -EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras") - -VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE) -VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE) - -VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY -VERSION_MANY = Combine( - VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False -)("_raw_spec") -_VERSION_SPEC = Optional(((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY)) -_VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or "") - -VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier") -VERSION_SPEC.setParseAction(lambda s, l, t: t[1]) - -MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") -MARKER_EXPR.setParseAction( - lambda s, l, t: Marker(s[t._original_start : t._original_end]) -) -MARKER_SEPARATOR = SEMICOLON -MARKER = MARKER_SEPARATOR + MARKER_EXPR - -VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER) -URL_AND_MARKER = URL + Optional(MARKER) - -NAMED_REQUIREMENT = NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER) - -REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd -# setuptools.extern.pyparsing isn't thread safe during initialization, so we do it eagerly, see -# issue #104 -REQUIREMENT.parseString("x[]") - - -class Requirement(object): - """Parse a requirement. - - Parse a given requirement string into its parts, such as name, specifier, - URL, and extras. Raises InvalidRequirement on a badly-formed requirement - string. - """ - - # TODO: Can we test whether something is contained within a requirement? - # If so how do we do that? 
Do we need to test against the _name_ of - # the thing as well as the version? What about the markers? - # TODO: Can we normalize the name and extra name? - - def __init__(self, requirement_string): - # type: (str) -> None - try: - req = REQUIREMENT.parseString(requirement_string) - except ParseException as e: - raise InvalidRequirement( - 'Parse error at "{0!r}": {1}'.format( - requirement_string[e.loc : e.loc + 8], e.msg - ) - ) - - self.name = req.name - if req.url: - parsed_url = urlparse.urlparse(req.url) - if parsed_url.scheme == "file": - if urlparse.urlunparse(parsed_url) != req.url: - raise InvalidRequirement("Invalid URL given") - elif not (parsed_url.scheme and parsed_url.netloc) or ( - not parsed_url.scheme and not parsed_url.netloc - ): - raise InvalidRequirement("Invalid URL: {0}".format(req.url)) - self.url = req.url - else: - self.url = None - self.extras = set(req.extras.asList() if req.extras else []) - self.specifier = SpecifierSet(req.specifier) - self.marker = req.marker if req.marker else None - - def __str__(self): - # type: () -> str - parts = [self.name] # type: List[str] - - if self.extras: - parts.append("[{0}]".format(",".join(sorted(self.extras)))) - - if self.specifier: - parts.append(str(self.specifier)) - - if self.url: - parts.append("@ {0}".format(self.url)) - if self.marker: - parts.append(" ") - - if self.marker: - parts.append("; {0}".format(self.marker)) - - return "".join(parts) - - def __repr__(self): - # type: () -> str - return "".format(str(self)) diff --git a/spaces/qingxu98/gpt-academic/multi_language.py b/spaces/qingxu98/gpt-academic/multi_language.py deleted file mode 100644 index 8e3ac9d7208d4258be408e8ddc3126baf98c6967..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/multi_language.py +++ /dev/null @@ -1,519 +0,0 @@ -""" - Translate this project to other languages (experimental, please open an issue if there is any bug) - - - Usage: - 1. modify config.py, set your LLM_MODEL and API_KEY(s) to provide access to OPENAI (or any other LLM model provider) - - 2. modify LANG (below ↓) - LANG = "English" - - 3. modify TransPrompt (below ↓) - TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." - - 4. Run `python multi_language.py`. - Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes. - - 5. Find the translated program in `multi-language\English\*` - - P.S. - - - The translation mapping will be stored in `docs/translation_xxxx.json`, you can revised mistaken translation there. - - - If you would like to share your `docs/translation_xxxx.json`, (so that everyone can use the cached & revised translation mapping), please open a Pull Request - - - If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request - - - Welcome any Pull Request, regardless of language -""" - -import os -import json -import functools -import re -import pickle -import time -from toolbox import get_conf - -CACHE_FOLDER, = get_conf('PATH_LOGGING') - -blacklist = ['multi-language', CACHE_FOLDER, '.git', 'private_upload', 'multi_language.py', 'build', '.github', '.vscode', '__pycache__', 'venv'] - -# LANG = "TraditionalChinese" -# TransPrompt = f"Replace each json value `#` with translated results in Traditional Chinese, e.g., \"原始文本\":\"翻譯後文字\". Keep Json format. Do not answer #." 
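The commented-out TransPrompt variants above and below share the request/response contract that `trans_json` relies on further down: every value in the request JSON is the placeholder `#`, and the model is expected to send back the same keys with translated values. A minimal sketch of that round trip, with a hypothetical second key and a hand-written reply standing in for the model output:

```python
import json

# Request payload built the same way trans_json builds inputs_array:
# every source string becomes a key whose value is the "#" placeholder.
payload = json.dumps({"原始文本": "#", "二级菜单": "#"}, ensure_ascii=False)

# A well-formed reply keeps the keys and replaces each "#" with a translation
# (hand-written here; in multi_language.py it comes from the LLM).
reply = '{"原始文本": "Original text", "二级菜单": "Secondary menu"}'

translated = json.loads(reply)
print(payload)                 # {"原始文本": "#", "二级菜单": "#"}
print(translated["原始文本"])  # Original text
```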
- -# LANG = "Japanese" -# TransPrompt = f"Replace each json value `#` with translated results in Japanese, e.g., \"原始文本\":\"テキストの翻訳\". Keep Json format. Do not answer #." - -LANG = "English" -TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." - - -if not os.path.exists(CACHE_FOLDER): - os.makedirs(CACHE_FOLDER) - - -def lru_file_cache(maxsize=128, ttl=None, filename=None): - """ - Decorator that caches a function's return value after being called with given arguments. - It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache. - maxsize: Maximum size of the cache. Defaults to 128. - ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache. - filename: Name of the file to store the cache in. If not supplied, the function name + ".cache" will be used. - """ - cache_path = os.path.join(CACHE_FOLDER, f"{filename}.cache") if filename is not None else None - - def decorator_function(func): - cache = {} - _cache_info = { - "hits": 0, - "misses": 0, - "maxsize": maxsize, - "currsize": 0, - "ttl": ttl, - "filename": cache_path, - } - - @functools.wraps(func) - def wrapper_function(*args, **kwargs): - key = str((args, frozenset(kwargs))) - if key in cache: - if _cache_info["ttl"] is None or (cache[key][1] + _cache_info["ttl"]) >= time.time(): - _cache_info["hits"] += 1 - print(f'Warning, reading cache, last read {(time.time()-cache[key][1])//60} minutes ago'); time.sleep(2) - cache[key][1] = time.time() - return cache[key][0] - else: - del cache[key] - - result = func(*args, **kwargs) - cache[key] = [result, time.time()] - _cache_info["misses"] += 1 - _cache_info["currsize"] += 1 - - if _cache_info["currsize"] > _cache_info["maxsize"]: - oldest_key = None - for k in cache: - if oldest_key is None: - oldest_key = k - elif cache[k][1] < cache[oldest_key][1]: - oldest_key = k - del cache[oldest_key] - _cache_info["currsize"] -= 1 - - if cache_path is not None: - with open(cache_path, "wb") as f: - pickle.dump(cache, f) - - return result - - def cache_info(): - return _cache_info - - wrapper_function.cache_info = cache_info - - if cache_path is not None and os.path.exists(cache_path): - with open(cache_path, "rb") as f: - cache = pickle.load(f) - _cache_info["currsize"] = len(cache) - - return wrapper_function - - return decorator_function - -def contains_chinese(string): - """ - Returns True if the given string contains Chinese characters, False otherwise. - """ - chinese_regex = re.compile(u'[\u4e00-\u9fff]+') - return chinese_regex.search(string) is not None - -def split_list(lst, n_each_req): - """ - Split a list into smaller lists, each with a maximum number of elements. 
- :param lst: the list to split - :param n_each_req: the maximum number of elements in each sub-list - :return: a list of sub-lists - """ - result = [] - for i in range(0, len(lst), n_each_req): - result.append(lst[i:i + n_each_req]) - return result - -def map_to_json(map, language): - dict_ = read_map_from_json(language) - dict_.update(map) - with open(f'docs/translate_{language.lower()}.json', 'w', encoding='utf8') as f: - json.dump(dict_, f, indent=4, ensure_ascii=False) - -def read_map_from_json(language): - if os.path.exists(f'docs/translate_{language.lower()}.json'): - with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f: - res = json.load(f) - res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)} - return res - return {} - -def advanced_split(splitted_string, spliter, include_spliter=False): - splitted_string_tmp = [] - for string_ in splitted_string: - if spliter in string_: - splitted = string_.split(spliter) - for i, s in enumerate(splitted): - if include_spliter: - if i != len(splitted)-1: - splitted[i] += spliter - splitted[i] = splitted[i].strip() - for i in reversed(range(len(splitted))): - if not contains_chinese(splitted[i]): - splitted.pop(i) - splitted_string_tmp.extend(splitted) - else: - splitted_string_tmp.append(string_) - splitted_string = splitted_string_tmp - return splitted_string_tmp - -cached_translation = {} -cached_translation = read_map_from_json(language=LANG) - -def trans(word_to_translate, language, special=False): - if len(word_to_translate) == 0: return {} - from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from toolbox import get_conf, ChatBotWithCookies - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':0.4, - } - import random - N_EACH_REQ = random.randint(16, 32) - word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) - inputs_array = [str(s) for s in word_to_translate_split] - inputs_show_user_array = inputs_array - history_array = [[] for _ in inputs_array] - if special: # to English using CamelCase Naming Convention - sys_prompt_array = [f"Translate following names to English with CamelCase naming convention. Keep original format" for _ in inputs_array] - else: - sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" 
for _ in inputs_array] - chatbot = ChatBotWithCookies(llm_kwargs) - gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, - inputs_show_user_array, - llm_kwargs, - chatbot, - history_array, - sys_prompt_array, - ) - while True: - try: - gpt_say = next(gpt_say_generator) - print(gpt_say[1][0][1]) - except StopIteration as e: - result = e.value - break - translated_result = {} - for i, r in enumerate(result): - if i%2 == 1: - try: - res_before_trans = eval(result[i-1]) - res_after_trans = eval(result[i]) - if len(res_before_trans) != len(res_after_trans): - raise RuntimeError - for a,b in zip(res_before_trans, res_after_trans): - translated_result[a] = b - except: - # try: - # res_before_trans = word_to_translate_split[(i-1)//2] - # res_after_trans = [s for s in result[i].split("', '")] - # for a,b in zip(res_before_trans, res_after_trans): - # translated_result[a] = b - # except: - print('GPT answers with unexpected format, some words may not be translated, but you can try again later to increase translation coverage.') - res_before_trans = eval(result[i-1]) - for a in res_before_trans: - translated_result[a] = None - return translated_result - - -def trans_json(word_to_translate, language, special=False): - if len(word_to_translate) == 0: return {} - from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from toolbox import get_conf, ChatBotWithCookies - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':0.1, - } - import random - N_EACH_REQ = random.randint(16, 32) - random.shuffle(word_to_translate) - word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) - inputs_array = [{k:"#" for k in s} for s in word_to_translate_split] - inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array] - - inputs_show_user_array = inputs_array - history_array = [[] for _ in inputs_array] - sys_prompt_array = [TransPrompt for _ in inputs_array] - chatbot = ChatBotWithCookies(llm_kwargs) - gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, - inputs_show_user_array, - llm_kwargs, - chatbot, - history_array, - sys_prompt_array, - ) - while True: - try: - gpt_say = next(gpt_say_generator) - print(gpt_say[1][0][1]) - except StopIteration as e: - result = e.value - break - translated_result = {} - for i, r in enumerate(result): - if i%2 == 1: - try: - translated_result.update(json.loads(result[i])) - except: - print(result[i]) - print(result) - return translated_result - - -def step_1_core_key_translate(): - LANG_STD = 'std' - def extract_chinese_characters(file_path): - syntax = [] - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - import ast - root = ast.parse(content) - for node in ast.walk(root): - if isinstance(node, ast.Name): - if contains_chinese(node.id): syntax.append(node.id) - if isinstance(node, ast.Import): - for n in node.names: - if contains_chinese(n.name): syntax.append(n.name) - elif isinstance(node, ast.ImportFrom): - for n in node.names: - if contains_chinese(n.name): syntax.append(n.name) - # if node.module is None: print(node.module) - for k in node.module.split('.'): - if 
contains_chinese(k): syntax.append(k) - return syntax - - def extract_chinese_characters_from_directory(directory_path): - chinese_characters = [] - for root, dirs, files in os.walk(directory_path): - if any([b in root for b in blacklist]): - continue - print(files) - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - chinese_characters.extend(extract_chinese_characters(file_path)) - return chinese_characters - - directory_path = './' - chinese_core_names = extract_chinese_characters_from_directory(directory_path) - chinese_core_keys = [name for name in chinese_core_names] - chinese_core_keys_norepeat = [] - for d in chinese_core_keys: - if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d) - need_translate = [] - cached_translation = read_map_from_json(language=LANG_STD) - cached_translation_keys = list(cached_translation.keys()) - for d in chinese_core_keys_norepeat: - if d not in cached_translation_keys: - need_translate.append(d) - - need_translate_mapping = trans(need_translate, language=LANG_STD, special=True) - map_to_json(need_translate_mapping, language=LANG_STD) - cached_translation = read_map_from_json(language=LANG_STD) - cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0]))) - - chinese_core_keys_norepeat_mapping = {} - for k in chinese_core_keys_norepeat: - chinese_core_keys_norepeat_mapping.update({k:cached_translation[k]}) - chinese_core_keys_norepeat_mapping = dict(sorted(chinese_core_keys_norepeat_mapping.items(), key=lambda x: -len(x[0]))) - - # =============================================== - # copy - # =============================================== - def copy_source_code(): - - from toolbox import get_conf - import shutil - import os - try: shutil.rmtree(f'./multi-language/{LANG}/') - except: pass - os.makedirs(f'./multi-language', exist_ok=True) - backup_dir = f'./multi-language/{LANG}/' - shutil.copytree('./', backup_dir, ignore=lambda x, y: blacklist) - copy_source_code() - - # =============================================== - # primary key replace - # =============================================== - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - # read again - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - for k, v in chinese_core_keys_norepeat_mapping.items(): - content = content.replace(k, v) - - with open(file_path, 'w', encoding='utf-8') as f: - f.write(content) - - -def step_2_core_key_translate(): - - # ================================================================================================= - # step2 - # ================================================================================================= - - def load_string(strings, string_input): - string_ = string_input.strip().strip(',').strip().strip('.').strip() - if string_.startswith('[Local Message]'): - string_ = string_.replace('[Local Message]', '') - string_ = string_.strip().strip(',').strip().strip('.').strip() - splitted_string = [string_] - # -------------------------------------- - splitted_string = advanced_split(splitted_string, spliter=",", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="。", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=")", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="(", 
include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="(", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=")", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="<", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=">", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="[", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="]", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="【", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="】", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="?", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=":", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=":", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=",", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="#", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="\n", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=";", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="`", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=" ", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="- ", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="---", include_spliter=False) - - # -------------------------------------- - for j, s in enumerate(splitted_string): # .com - if '.com' in s: continue - if '\'' in s: continue - if '\"' in s: continue - strings.append([s,0]) - - - def get_strings(node): - strings = [] - # recursively traverse the AST - for child in ast.iter_child_nodes(node): - node = child - if isinstance(child, ast.Str): - if contains_chinese(child.s): - load_string(strings=strings, string_input=child.s) - elif isinstance(child, ast.AST): - strings.extend(get_strings(child)) - return strings - - string_literals = [] - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - # comments - comments_arr = [] - for code_sp in content.splitlines(): - comments = re.findall(r'#.*$', code_sp) - for comment in comments: - load_string(strings=comments_arr, string_input=comment) - string_literals.extend(comments_arr) - - # strings - import ast - tree = ast.parse(content) - res = get_strings(tree, ) - string_literals.extend(res) - - [print(s) for s in string_literals] - chinese_literal_names = [] - chinese_literal_names_norepeat = [] - for string, offset in string_literals: - chinese_literal_names.append(string) - chinese_literal_names_norepeat = [] - for d in chinese_literal_names: - if d not in chinese_literal_names_norepeat: chinese_literal_names_norepeat.append(d) - need_translate = [] - cached_translation = read_map_from_json(language=LANG) - cached_translation_keys = list(cached_translation.keys()) - for d in chinese_literal_names_norepeat: - if d not in cached_translation_keys: - need_translate.append(d) - - - up = trans_json(need_translate, language=LANG, special=False) - 
map_to_json(up, language=LANG) - cached_translation = read_map_from_json(language=LANG) - LANG_STD = 'std' - cached_translation.update(read_map_from_json(language=LANG_STD)) - cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0]))) - - # =============================================== - # literal key replace - # =============================================== - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - # read again - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - for k, v in cached_translation.items(): - if v is None: continue - if '"' in v: - v = v.replace('"', "`") - if '\'' in v: - v = v.replace('\'', "`") - content = content.replace(k, v) - - with open(file_path, 'w', encoding='utf-8') as f: - f.write(content) - - if file.strip('.py') in cached_translation: - file_new = cached_translation[file.strip('.py')] + '.py' - file_path_new = os.path.join(root, file_new) - with open(file_path_new, 'w', encoding='utf-8') as f: - f.write(content) - os.remove(file_path) -step_1_core_key_translate() -step_2_core_key_translate() -print('Finished, checkout generated results at ./multi-language/') \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/request_llm/edge_gpt_free.py b/spaces/qingxu98/gpt-academic/request_llm/edge_gpt_free.py deleted file mode 100644 index 22ff05272b634e8557ceb83501248cc238074366..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/request_llm/edge_gpt_free.py +++ /dev/null @@ -1,1125 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -""" -Main.py -""" - -import argparse -import asyncio -import json -import os -import random -import re -import ssl -import sys -import time -import uuid -from enum import Enum -from pathlib import Path -from typing import Generator -from typing import Literal -from typing import Optional -from typing import Union - -import aiohttp -import certifi -import httpx -from prompt_toolkit import PromptSession -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory -from prompt_toolkit.completion import WordCompleter -from prompt_toolkit.history import InMemoryHistory -from prompt_toolkit.key_binding import KeyBindings -from rich.live import Live -from rich.markdown import Markdown - -DELIMITER = "\x1e" - - -# Generate random IP between range 13.104.0.0/14 -FORWARDED_IP = ( - f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" -) - -HEADERS = { - "accept": "application/json", - "accept-language": "en-US,en;q=0.9", - "content-type": "application/json", - "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"109.0.1518.78"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": "", - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "x-ms-client-request-id": str(uuid.uuid4()), - "x-ms-useragent": 
"azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32", - "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx", - "Referrer-Policy": "origin-when-cross-origin", - "x-forwarded-for": FORWARDED_IP, -} - -HEADERS_INIT_CONVER = { - "authority": "edgeservices.bing.com", - "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7", - "accept-language": "en-US,en;q=0.9", - "cache-control": "max-age=0", - "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"110.0.1587.69"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": '""', - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "document", - "sec-fetch-mode": "navigate", - "sec-fetch-site": "none", - "sec-fetch-user": "?1", - "upgrade-insecure-requests": "1", - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69", - "x-edge-shopping-flag": "1", - "x-forwarded-for": FORWARDED_IP, -} - -ssl_context = ssl.create_default_context() -ssl_context.load_verify_locations(certifi.where()) - - -class NotAllowedToAccess(Exception): - pass - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3imaginative", - "travelansgnd", - "dv3sugg", - "clgalileo", - "gencontentv3", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "h3precise", - "clgalileo", - "nojbfedge", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] - - -def _append_identifier(msg: dict) -> str: - """ - Appends special character to end of message to identify end of message - """ - # Convert dict to json string - return json.dumps(msg, ensure_ascii=False) + DELIMITER - - -def _get_ran_hex(length: int = 32) -> str: - """ - Returns random hex string - """ - return "".join(random.choice("0123456789abcdef") for _ in range(length)) - - -class _ChatHubRequest: - """ - Request object for ChatHub - """ - - def __init__( - self, - conversation_signature: str, - client_id: str, - conversation_id: str, - invocation_id: int = 0, - ) -> None: - self.struct: dict = {} - - self.client_id: str = client_id - self.conversation_id: str = conversation_id - self.conversation_signature: str = conversation_signature - self.invocation_id: int = invocation_id - - def update( - self, - prompt: str, - conversation_style: CONVERSATION_STYLE_TYPE, - options = None, - webpage_context = None, - search_result = False, - ) -> None: - """ - Updates request 
object - """ - if options is None: - options = [ - "deepleo", - "enable_debug_commands", - "disable_emoji_spoken_text", - "enablemm", - ] - if conversation_style: - if not isinstance(conversation_style, ConversationStyle): - conversation_style = getattr(ConversationStyle, conversation_style) - options = conversation_style.value - self.struct = { - "arguments": [ - { - "source": "cib", - "optionsSets": options, - "allowedMessageTypes": [ - "Chat", - "Disengaged", - "AdsQuery", - "SemanticSerp", - "GenerateContentQuery", - "SearchQuery", - ], - "sliceIds": [ - "chk1cf", - "nopreloadsscf", - "winlongmsg2tf", - "perfimpcomb", - "sugdivdis", - "sydnoinputt", - "wpcssopt", - "wintone2tf", - "0404sydicnbs0", - "405suggbs0", - "scctl", - "330uaugs0", - "0329resp", - "udscahrfon", - "udstrblm5", - "404e2ewrt", - "408nodedups0", - "403tvlansgnd", - ], - "traceId": _get_ran_hex(32), - "isStartOfSession": self.invocation_id == 0, - "message": { - "author": "user", - "inputMethod": "Keyboard", - "text": prompt, - "messageType": "Chat", - }, - "conversationSignature": self.conversation_signature, - "participant": { - "id": self.client_id, - }, - "conversationId": self.conversation_id, - }, - ], - "invocationId": str(self.invocation_id), - "target": "chat", - "type": 4, - } - if search_result: - have_search_result = [ - "InternalSearchQuery", - "InternalSearchResult", - "InternalLoaderMessage", - "RenderCardRequest", - ] - self.struct["arguments"][0]["allowedMessageTypes"] += have_search_result - if webpage_context: - self.struct["arguments"][0]["previousMessages"] = [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----", - }, - ] - self.invocation_id += 1 - - -class _Conversation: - """ - Conversation API - """ - - def __init__( - self, - proxy = None, - async_mode = False, - cookies = None, - ) -> None: - if async_mode: - return - self.struct: dict = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - self.session = httpx.Client( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - ) - if cookies: - for cookie in cookies: - self.session.cookies.set(cookie["name"], cookie["value"]) - # Send GET request - response = self.session.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = self.session.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = _Conversation(async_mode=True) - self.struct = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - transport = httpx.AsyncHTTPTransport(retries=10) - # Convert cookie format to httpx format - formatted_cookies = None - if cookies: - formatted_cookies = httpx.Cookies() - for cookie in cookies: - formatted_cookies.set(cookie["name"], cookie["value"]) - async with httpx.AsyncClient( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - transport=transport, - cookies=formatted_cookies, - ) as client: - # Send GET request - response = await client.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = await client.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - return self - - -class _ChatHub: - """ - Chat API - """ - - def __init__( - self, - conversation: _Conversation, - proxy = None, - cookies = None, - ) -> None: - self.session = None - self.wss = None - self.request: _ChatHubRequest - self.loop: bool - self.task: asyncio.Task - self.request = _ChatHubRequest( - conversation_signature=conversation.struct["conversationSignature"], - client_id=conversation.struct["clientId"], - conversation_id=conversation.struct["conversationId"], - ) - self.cookies = cookies - self.proxy: str = proxy - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - req_header = HEADERS - if self.cookies is not None: - ws_cookies = [] - for cookie in self.cookies: - ws_cookies.append(f"{cookie['name']}={cookie['value']}") - req_header.update({ - 'Cookie': ';'.join(ws_cookies), - }) - - timeout = aiohttp.ClientTimeout(total=30) - self.session = aiohttp.ClientSession(timeout=timeout) - - if self.wss and not self.wss.closed: - await self.wss.close() - # Check if websocket is closed - self.wss = await self.session.ws_connect( - wss_link, - headers=req_header, - ssl=ssl_context, - proxy=self.proxy, - autoping=False, - ) - await self._initial_handshake() - if self.request.invocation_id == 0: - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ) - else: - async with httpx.AsyncClient() as client: - response = await client.post( - "https://sydney.bing.com/sydney/UpdateConversation/", - json={ - "messages": [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - }, - ], - "conversationId": self.request.conversation_id, - "source": "cib", - "traceId": _get_ran_hex(32), - "participant": {"id": self.request.client_id}, - "conversationSignature": self.request.conversation_signature, - }, - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Update web page context failed") - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - ) - # Send request - await self.wss.send_str(_append_identifier(self.request.struct)) - final = False - draw = False - resp_txt = "" - result_text = "" - resp_txt_no_link = "" - while not final: - msg = await self.wss.receive() - try: - objects = msg.data.split(DELIMITER) - except : - continue - - for obj in objects: - if obj is None or not obj: - continue - response = json.loads(obj) - if response.get("type") != 2 and raw: - yield False, response - elif response.get("type") == 1 and response["arguments"][0].get( - "messages", - ): - if not draw: - if ( - response["arguments"][0]["messages"][0].get("messageType") - == "GenerateContentQuery" - ): - async with ImageGenAsync("", True) as image_generator: - images = await image_generator.get_images( - response["arguments"][0]["messages"][0]["text"], - ) - for i, image in enumerate(images): - resp_txt = resp_txt + 
f"\n![image{i}]({image})" - draw = True - if ( - response["arguments"][0]["messages"][0]["contentOrigin"] - != "Apology" - ) and not draw: - resp_txt = result_text + response["arguments"][0][ - "messages" - ][0]["adaptiveCards"][0]["body"][0].get("text", "") - resp_txt_no_link = result_text + response["arguments"][0][ - "messages" - ][0].get("text", "") - if response["arguments"][0]["messages"][0].get( - "messageType", - ): - resp_txt = ( - resp_txt - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - result_text = ( - result_text - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - yield False, resp_txt - - elif response.get("type") == 2: - if response["item"]["result"].get("error"): - await self.close() - raise Exception( - f"{response['item']['result']['value']}: {response['item']['result']['message']}", - ) - if draw: - cache = response["item"]["messages"][1]["adaptiveCards"][0][ - "body" - ][0]["text"] - response["item"]["messages"][1]["adaptiveCards"][0]["body"][0][ - "text" - ] = (cache + resp_txt) - if ( - response["item"]["messages"][-1]["contentOrigin"] == "Apology" - and resp_txt - ): - response["item"]["messages"][-1]["text"] = resp_txt_no_link - response["item"]["messages"][-1]["adaptiveCards"][0]["body"][0][ - "text" - ] = resp_txt - print( - "Preserved the message from being deleted", - file=sys.stderr, - ) - final = True - await self.close() - yield True, response - - async def _initial_handshake(self) -> None: - await self.wss.send_str(_append_identifier({"protocol": "json", "version": 1})) - await self.wss.receive() - - async def close(self) -> None: - """ - Close the connection - """ - if self.wss and not self.wss.closed: - await self.wss.close() - if self.session and not self.session.closed: - await self.session.close() - - -class Chatbot: - """ - Combines everything to make it seamless - """ - - def __init__( - self, - proxy = None, - cookies = None, - ) -> None: - self.proxy = proxy - self.chat_hub: _ChatHub = _ChatHub( - _Conversation(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = Chatbot.__new__(Chatbot) - self.proxy = proxy - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - return self - - async def ask( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> dict: - """ - Ask a question to the bot - """ - async for final, response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - if final: - return response - await self.chat_hub.wss.close() - return {} - - async def ask_stream( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - async for response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - raw=raw, - 
options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - yield response - - async def close(self) -> None: - """ - Close the connection - """ - await self.chat_hub.close() - - async def reset(self) -> None: - """ - Reset the conversation - """ - await self.close() - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy), - proxy=self.proxy, - cookies=self.chat_hub.cookies, - ) - - -async def _get_input_async( - session: PromptSession = None, - completer: WordCompleter = None, -) -> str: - """ - Multiline input function. - """ - return await session.prompt_async( - completer=completer, - multiline=True, - auto_suggest=AutoSuggestFromHistory(), - ) - - -def _create_session() -> PromptSession: - kb = KeyBindings() - - @kb.add("enter") - def _(event): - buffer_text = event.current_buffer.text - if buffer_text.startswith("!"): - event.current_buffer.validate_and_handle() - else: - event.current_buffer.insert_text("\n") - - @kb.add("escape") - def _(event): - if event.current_buffer.complete_state: - # event.current_buffer.cancel_completion() - event.current_buffer.text = "" - - return PromptSession(key_bindings=kb, history=InMemoryHistory()) - - -def _create_completer(commands: list, pattern_str: str = "$"): - return WordCompleter(words=commands, pattern=re.compile(pattern_str)) - - -async def async_main(args: argparse.Namespace) -> None: - """ - Main function - """ - print("Initializing...") - print("Enter `alt+enter` or `escape+enter` to send a message") - # Read and parse cookies - cookies = None - if args.cookie_file: - cookies = json.loads(open(args.cookie_file, encoding="utf-8").read()) - bot = await Chatbot.create(proxy=args.proxy, cookies=cookies) - session = _create_session() - completer = _create_completer(["!help", "!exit", "!reset"]) - initial_prompt = args.prompt - - while True: - print("\nYou:") - if initial_prompt: - question = initial_prompt - print(question) - initial_prompt = None - else: - question = ( - input() - if args.enter_once - else await _get_input_async(session=session, completer=completer) - ) - print() - if question == "!exit": - break - if question == "!help": - print( - """ - !help - Show this help message - !exit - Exit the program - !reset - Reset the conversation - """, - ) - continue - if question == "!reset": - await bot.reset() - continue - print("Bot:") - if args.no_stream: - print( - ( - await bot.ask( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ) - )["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"], - ) - else: - wrote = 0 - if args.rich: - md = Markdown("") - with Live(md, auto_refresh=False) as live: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if wrote > len(response): - print(md) - print(Markdown("***Bing revoked the response.***")) - wrote = len(response) - md = Markdown(response) - live.update(md, refresh=True) - else: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if not wrote: - print(response, end="", flush=True) - else: - print(response[wrote:], end="", flush=True) - wrote = len(response) - print() - await bot.close() - - -def main() -> None: - print( - """ - EdgeGPT - A demo of reverse engineering the Bing GPT chatbot - Repo: github.com/acheong08/EdgeGPT - By: Antonio Cheong - - !help for help - - Type !exit to exit - """, - ) - parser = 
argparse.ArgumentParser() - parser.add_argument("--enter-once", action="store_true") - parser.add_argument("--no-stream", action="store_true") - parser.add_argument("--rich", action="store_true") - parser.add_argument( - "--proxy", - help="Proxy URL (e.g. socks5://127.0.0.1:1080)", - type=str, - ) - parser.add_argument( - "--wss-link", - help="WSS URL(e.g. wss://sydney.bing.com/sydney/ChatHub)", - type=str, - default="wss://sydney.bing.com/sydney/ChatHub", - ) - parser.add_argument( - "--style", - choices=["creative", "balanced", "precise"], - default="balanced", - ) - parser.add_argument( - "--prompt", - type=str, - default="", - required=False, - help="prompt to start with", - ) - parser.add_argument( - "--cookie-file", - type=str, - default="", - required=False, - help="path to cookie file", - ) - args = parser.parse_args() - asyncio.run(async_main(args)) - - -class Cookie: - """ - Convenience class for Bing Cookie files, data, and configuration. This Class - is updated dynamically by the Query class to allow cycling through >1 - cookie/credentials file e.g. when daily request limits (current 200 per - account per day) are exceeded. - """ - - current_file_index = 0 - dirpath = Path("./").resolve() - search_pattern = "bing_cookies_*.json" - ignore_files = set() - - @classmethod - def fetch_default(cls, path=None): - from selenium import webdriver - from selenium.webdriver.common.by import By - - driver = webdriver.Edge() - driver.get("https://bing.com/chat") - time.sleep(5) - xpath = '//button[@id="bnp_btn_accept"]' - driver.find_element(By.XPATH, xpath).click() - time.sleep(2) - xpath = '//a[@id="codexPrimaryButton"]' - driver.find_element(By.XPATH, xpath).click() - if path is None: - path = Path("./bing_cookies__default.json") - # Double underscore ensures this file is first when sorted - cookies = driver.get_cookies() - Path(path).write_text(json.dumps(cookies, indent=4), encoding="utf-8") - # Path again in case supplied path is: str - print(f"Cookies saved to: {path}") - driver.quit() - - @classmethod - def files(cls): - """Return a sorted list of all cookie files matching .search_pattern""" - all_files = set(cls.dirpath.glob(cls.search_pattern)) - return sorted(list(all_files - cls.ignore_files)) - - @classmethod - def import_data(cls): - """ - Read the active cookie file and populate the following attributes: - - .current_filepath - .current_data - .image_token - """ - try: - cls.current_filepath = cls.files()[cls.current_file_index] - except IndexError: - print( - "> Please set Cookie.current_filepath to a valid cookie file, then run Cookie.import_data()", - ) - return - print(f"> Importing cookies from: {cls.current_filepath.name}") - with open(cls.current_filepath, encoding="utf-8") as file: - cls.current_data = json.load(file) - cls.image_token = [x for x in cls.current_data if x.get("name") == "_U"] - cls.image_token = cls.image_token[0].get("value") - - @classmethod - def import_next(cls): - """ - Cycle through to the next cookies file. Import it. Mark the previous - file to be ignored for the remainder of the current session. - """ - cls.ignore_files.add(cls.current_filepath) - if Cookie.current_file_index >= len(cls.files()): - Cookie.current_file_index = 0 - Cookie.import_data() - - -class Query: - """ - A convenience class that wraps around EdgeGPT.Chatbot to encapsulate input, - config, and output all together. 
Relies on Cookie class for authentication - """ - - def __init__( - self, - prompt, - style="precise", - content_type="text", - cookie_file=0, - echo=True, - echo_prompt=False, - ): - """ - Arguments: - - prompt: Text to enter into Bing Chat - style: creative, balanced, or precise - content_type: "text" for Bing Chat; "image" for Dall-e - cookie_file: Path, filepath string, or index (int) to list of cookie paths - echo: Print something to confirm request made - echo_prompt: Print confirmation of the evaluated prompt - """ - self.index = [] - self.request_count = {} - self.image_dirpath = Path("./").resolve() - Cookie.import_data() - self.index += [self] - self.prompt = prompt - files = Cookie.files() - if isinstance(cookie_file, int): - index = cookie_file if cookie_file < len(files) else 0 - else: - if not isinstance(cookie_file, (str, Path)): - message = "'cookie_file' must be an int, str, or Path object" - raise TypeError(message) - cookie_file = Path(cookie_file) - if cookie_file in files(): # Supplied filepath IS in Cookie.dirpath - index = files.index(cookie_file) - else: # Supplied filepath is NOT in Cookie.dirpath - if cookie_file.is_file(): - Cookie.dirpath = cookie_file.parent.resolve() - if cookie_file.is_dir(): - Cookie.dirpath = cookie_file.resolve() - index = 0 - Cookie.current_file_index = index - if content_type == "text": - self.style = style - self.log_and_send_query(echo, echo_prompt) - if content_type == "image": - self.create_image() - - def log_and_send_query(self, echo, echo_prompt): - self.response = asyncio.run(self.send_to_bing(echo, echo_prompt)) - name = str(Cookie.current_filepath.name) - if not self.request_count.get(name): - self.request_count[name] = 1 - else: - self.request_count[name] += 1 - - def create_image(self): - image_generator = ImageGen(Cookie.image_token) - image_generator.save_images( - image_generator.get_images(self.prompt), - output_dir=self.image_dirpath, - ) - - async def send_to_bing(self, echo=True, echo_prompt=False): - """Creat, submit, then close a Chatbot instance. Return the response""" - retries = len(Cookie.files()) - while retries: - try: - bot = await Chatbot.create() - if echo_prompt: - print(f"> {self.prompt=}") - if echo: - print("> Waiting for response...") - if self.style.lower() not in "creative balanced precise".split(): - self.style = "precise" - response = await bot.ask( - prompt=self.prompt, - conversation_style=getattr(ConversationStyle, self.style), - # wss_link="wss://sydney.bing.com/sydney/ChatHub" - # What other values can this parameter take? 
It seems to be optional - ) - return response - except KeyError: - print( - f"> KeyError [{Cookie.current_filepath.name} may have exceeded the daily limit]", - ) - Cookie.import_next() - retries -= 1 - finally: - await bot.close() - - @property - def output(self): - """The response from a completed Chatbot request""" - return self.response["item"]["messages"][1]["text"] - - @property - def sources(self): - """The source names and details parsed from a completed Chatbot request""" - return self.response["item"]["messages"][1]["sourceAttributions"] - - @property - def sources_dict(self): - """The source names and details as a dictionary""" - sources_dict = {} - name = "providerDisplayName" - url = "seeMoreUrl" - for source in self.sources: - if name in source.keys() and url in source.keys(): - sources_dict[source[name]] = source[url] - else: - continue - return sources_dict - - @property - def code(self): - """Extract and join any snippets of Python code in the response""" - code_blocks = self.output.split("```")[1:-1:2] - code_blocks = ["\n".join(x.splitlines()[1:]) for x in code_blocks] - return "\n\n".join(code_blocks) - - @property - def languages(self): - """Extract all programming languages given in code blocks""" - code_blocks = self.output.split("```")[1:-1:2] - return {x.splitlines()[0] for x in code_blocks} - - @property - def suggestions(self): - """Follow-on questions suggested by the Chatbot""" - return [ - x["text"] - for x in self.response["item"]["messages"][1]["suggestedResponses"] - ] - - def __repr__(self): - return f"" - - def __str__(self): - return self.output - - -class ImageQuery(Query): - def __init__(self, prompt, **kwargs): - kwargs.update({"content_type": "image"}) - super().__init__(prompt, **kwargs) - - def __repr__(self): - return f"" - - -if __name__ == "__main__": - main() diff --git a/spaces/quidiaMuxgu/Expedit-SAM/AVS Audio Editor 9.1.2.540 Crack With Product Key 2020 Download [UPD].md b/spaces/quidiaMuxgu/Expedit-SAM/AVS Audio Editor 9.1.2.540 Crack With Product Key 2020 Download [UPD].md deleted file mode 100644 index 1ef76c1b06db71362599cb5f376a431be5ba5cb4..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/AVS Audio Editor 9.1.2.540 Crack With Product Key 2020 Download [UPD].md +++ /dev/null @@ -1,8 +0,0 @@ - -

    AVS Audio Editor Crack can be used for many aspects of video editing. Looking at the overall interface, the basic toolbox provides the standard editing tools, and you can also enhance details and trim clips. Some of the additional options are described below.

    -

    AVS Audio Editor Crack has some incredible tools that include the ability to edit and trim your video clip, remove background noise, improve audio quality, enhance the images, apply multiple audio tracks, and much more. You can also merge your video clips for making a single file and such.

    -

    AVS Audio Editor 9.1.2.540 Crack With Product Key 2020 Download


    Download Zip ————— https://geags.com/2uCsRk



    -

    The decoder of AVS Audio Editor contains a wide range of audio and video parameters for editing video or audio clips. You can easily change the bitrate, frame rate, quality, audio format, and duration. The tool offers an easy-to-use interface that enables users to edit video and audio clips with ease.

    -

    Edraw Max 10 Crack is a user-friendly program that is designed for people who are not extremely skilled in the business and require a simple, effective means to produce simple diagrams. This is a system analysis tool that includes several items that may be used within a product. This program has a superb interface with an intuitive user interface. Therefore, anyone without any previous knowledge of AVS Audio Editor Serial Key can use it quickly. The AVS Audio Editor 9 License Key represents an overall classic in the field of CAD tools. This software is a powerful tool that offers several functions for creating graphics, drawings, and designs. AVS Audio Editor Full Version supports multitasking with fast and easy workflow. Other key features include the ability to create graphics, networking diagrams, and circle diagrams. AVS Audio Editor Keygen is a free software offered by Online Media Technologies. The main focus of the program is to achieve numerous audio creations with ease. You can save audio files in mp3, FLAC, WAV, M4A, WMA, and AAC format. AVS Audio Editor Keygen is a free software offered by Online Media Technologies. The main focus of the program is to achieve various audio creations with ease. It can work on Windows 7, 8. 10, XP, and Vista operating systems.

    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Auto-Tune Pro V9.1.0 VST VST3 AAX X64 R2R.md b/spaces/quidiaMuxgu/Expedit-SAM/Auto-Tune Pro V9.1.0 VST VST3 AAX X64 R2R.md deleted file mode 100644 index 9b7f5b078f180bed7b252428e76663a6fc712a96..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Auto-Tune Pro V9.1.0 VST VST3 AAX X64 R2R.md +++ /dev/null @@ -1,130 +0,0 @@ -
    -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R - The Ultimate Vocal Production Software

    - -

    If you are looking for a software that can help you to create professional-sounding vocals with pitch correction and effects, you might want to check out Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R. This software is the most complete and advanced edition of Auto-Tune, the world's leading vocal production software.

    - -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R includes both Auto Mode and Graph Mode, which allow you to work in real-time or in detail on your vocal tracks. You can use Auto Mode to quickly correct the pitch and add effects, such as vibrato, formant, and throat modeling. You can also use Graph Mode to edit the pitch and time of your vocals with precision and control.

    -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R


    DOWNLOAD ⚙⚙⚙ https://geags.com/2uCqMP



    - -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R also comes with a bundle of plugins that enhance your vocal production workflow. You can use Auto-Key to automatically detect the key and scale of your music and send it to Auto-Tune Pro. You can use Auto-Tune Access to access the essential features of Auto-Tune with a simple and intuitive interface. You can use Auto-Tune Artist to perform real-time pitch correction and effects on stage or in the studio. You can use Auto-Tune EFX+ to create vocoder-style vocal effects with pitch shifting, distortion, filter, and modulation.

    - -

    How to Download Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R

    - -

    Before you download Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R, you need to make sure that your computer meets the minimum system requirements for the software. Here are the main specifications that you need to check:

    - -
      -
    • Operating system: Windows 8.1 or Windows 10 (64-bit only)
    • Processor: Intel Core i5 or AMD equivalent with SSE2 technology
    • Memory: 4 GB RAM (8 GB recommended)
    • Hard disk: 500 MB free space for installation
    • Display: 1280 x 800 with true color (1920 x 1080 or higher recommended)
    • Graphics card: DirectX 10 or DirectX 11 compatible
    • Internet connection: Required for online activation
    - -

    Once you have verified that your computer can run Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R smoothly, you can proceed to download the software from one of the links below:

    - -Antares - Auto-Tune Pro v9.1.0 (VST, VST3, AAX) [WiN x64] - Audioplugin.net -Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R - Sample Drive -Auto-Tune Pro v9.1.0 WiN-R2R - MaGeSY ® | MaGeSY® PRO - -

    After downloading the software, you need to extract it using WinRAR or any other file compression software. Then, follow these steps to install and activate Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R:

    - -
      -
    1. Run the setup file of Auto-Tune Pro v9.1.0 and install it as a trial version.
    2. -
    3. When the installation is complete, run the software and click on Activate.
    4. -
    5. Select I have an activation code from Antares and copy the request code.
    6. -
    7. Run X-force keygen as administrator and click on Patch.
    8. -
    9. Paste the request code into the keygen and click on Generate.
    10. -
    11. Copy the activation code from the keygen and paste it into the software.
    12. -
    13. Click on Next and enjoy Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R for free.
    14. -
    - -

    Benefits of Using Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R

    - -

    By using Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R, you can enjoy all the features and benefits of the most complete and advanced edition of Auto-Tune. Here are some of the advantages of using this software:

    - -
      -
    • You can create professional-sounding vocals with pitch correction and effects in any genre or style of music.
    • -
    • You can work in real-time or in detail on your vocal tracks with a redesigned interface and powerful new features.
    • -
    • You can access the classic sound of Auto-Tune 5 with Classic Mode or experiment with new sounds with Flex-Tune and Humanize.
    • -
    • You can integrate with other DAWs and plugins using ARA support or MIDI control.
    • -
    • You can use a bundle of plugins that enhance your vocal production workflow with key detection, essential features, real-time performance, and vocoder effects.
    • -
    - -

    Conclusion

    - -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R is the ultimate vocal production software that offers professional pitch correction and classic effects for your vocals. You can download it for free using X-force keygen, a tool that can generate valid serial numbers and activation codes for Antares products. However, you should be aware that using X-force keygen is illegal and unethical, and may expose you to various risks and consequences. Therefore, we do not recommend or endorse using X-force keygen for any purposes. If you want to use Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R legally and ethically, you should buy a license from Antares or use their free trial version.

    - -

    We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    How to Use Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R

    - -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R is a user-friendly software that has a simple and intuitive interface. You can easily create and modify your vocal tracks using various tools and commands. You can also import and export data from other DAWs and plugins using ARA support or MIDI control.

    -

    - -

    To use Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R, you need to follow these basic steps:

    - -
      -
    1. Start a project and set the preferences. You can choose the units, language, input type, and retune speed for your project. You can also customize the appearance and behavior of the software according to your preferences.
    2. -
    3. Select the operation mode. You can choose between Auto Mode and Graph Mode, depending on your needs and workflow. Auto Mode is ideal for quick and easy pitch correction and effects, while Graph Mode is ideal for precise and detailed editing of pitch and time.
    4. -
    5. Adjust the pitch and time of your vocals. In Auto Mode, you can use the key and scale controls to set the target pitches for your vocals. You can also use the Classic Mode button to access the sound of Auto-Tune 5. In Graph Mode, you can use the tools and functions to draw, move, cut, copy, paste, or delete pitch curves. You can also use the Time Control panel to edit the timing of your vocals.
    6. -
    7. Add effects to your vocals. In Auto Mode, you can use the Formant, Throat Length, Vibrato, Transpose, and Flex-Tune controls to modify the sound and timbre of your vocals. In Graph Mode, you can use the Formant Correction function to preserve or change the formants of your vocals.
    8. -
    9. Create reports and export your vocals. You can use the Report Generator function to create detailed reports of your pitch correction settings and results. You can also export your vocals as audio files or MIDI files.
    10. -
    - -

    Why Choose Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R

    - -

    Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R is a powerful and reliable software that can help you to create professional-sounding vocals with pitch correction and effects. Here are some of the reasons why you should choose Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R for your vocal production projects:

    - -
      -
• It supports ARA-integrated workflows that allow you to exchange data with other DAWs and plugins seamlessly.
    • -
    • It offers extensive pitch correction capabilities that enable you to correct the pitch of static or dynamic vocals in any genre or style of music.
    • -
    • It provides advanced time editing tools that allow you to adjust the timing of your vocals with accuracy and flexibility.
    • -
    • It works with country-specific scales and tunings to support different musical traditions and cultures.
    • -
• It lets you create and save custom scales, tunings, and presets to fit your project.
    • -
    - -


    -

    How X-force Keygen Works

    - -

    X-force keygen is a tool that can generate valid serial numbers and activation codes for Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R and other Antares products. It works by reverse engineering the code that the software uses to verify the license and creating a new code that bypasses the verification process. This way, you can activate your Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R without having to purchase a license or use a trial version.

    - -

    To use X-force keygen, you need to follow these steps:

    - -
      -
    1. Download X-force keygen from one of the links provided in this article and extract it using WinRAR or any other file compression software.
    2. -
    3. Run the setup file of Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R and install it as a trial version.
    4. -
    5. When the installation is complete, run the software and click on Activate.
    6. -
    7. Select I have an activation code from Antares and copy the request code.
    8. -
    9. Run X-force keygen as administrator and click on Patch.
    10. -
    11. Paste the request code into the keygen and click on Generate.
    12. -
    13. Copy the activation code from the keygen and paste it into the software.
    14. -
    15. Click on Next and enjoy Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R for free.
    16. -
    - -

    Risks of Using X-force Keygen

    - -

    While X-force keygen may seem like a convenient and easy way to download Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R for free, you should be aware of the risks and consequences of using it. Here are some of the possible drawbacks of using X-force keygen:

    - -
      -
    • It is illegal and unethical. Using X-force keygen is a form of software piracy, which is a criminal offense in many countries. You are violating the terms of service of Antares and depriving them of their rightful revenue. You may face legal actions or penalties if you are caught using X-force keygen.
    • -
    • It is unsafe and unreliable. X-force keygen may contain viruses, malware, or spyware that can harm your computer or steal your personal information. You may also encounter errors, bugs, or crashes when using your Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R. You may not be able to access updates, support, or online features that are available for licensed users.
    • -
    • It is unethical and unfair. Using X-force keygen is a form of cheating, which is unfair to other users who pay for their licenses or use their trial versions. You are also disrespecting the hard work and creativity of the developers who create these software products.
    • -
    - -

    Conclusion

    - -

    X-force keygen is a tool that can help you download Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R for free and use it without any limitations. However, you should be aware that using X-force keygen is illegal and unethical, and may expose you to various risks and consequences. Therefore, we do not recommend or endorse using X-force keygen for any purposes. If you want to use Auto-Tune Pro v9.1.0 VST VST3 AAX x64 R2R legally and ethically, you should buy a license from Antares or use their free trial version.

    - -

    We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -


    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/train/mel_processing.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/train/mel_processing.py deleted file mode 100644 index 03330d247aea554c9e87d497e8e969305772afab..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/train/mel_processing.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch -from librosa.filters import mel as librosa_mel_fn -import logging - -logger = logging.getLogger(__name__) - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - return dynamic_range_compression_torch(magnitudes) - - -def spectral_de_normalize_torch(magnitudes): - return dynamic_range_decompression_torch(magnitudes) - - -# Reusable banks -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - """Convert waveform into Linear-frequency Linear-amplitude spectrogram. - - Args: - y :: (B, T) - Audio waveforms - n_fft - sampling_rate - hop_size - win_size - center - Returns: - :: (B, Freq, Frame) - Linear-frequency Linear-amplitude spectrogram - """ - # Validation - if torch.min(y) < -1.07: - logger.debug("min value is %s", str(torch.min(y))) - if torch.max(y) > 1.07: - logger.debug("max value is %s", str(torch.max(y))) - - # Window - Cache if needed - global hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - # Padding - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - # Complex Spectrogram :: (B, T) -> (B, Freq, Frame, RealComplex=2) - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - # Linear-frequency Linear-amplitude spectrogram :: (B, Freq, Frame, RealComplex=2) -> (B, Freq, Frame) - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - # MelBasis - Cache if needed - global mel_basis - dtype_device = str(spec.dtype) + "_" + str(spec.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn( - sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax - ) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=spec.dtype, device=spec.device - ) - - # Mel-frequency Log-amplitude spectrogram :: (B, Freq=num_mels, Frame) - melspec = torch.matmul(mel_basis[fmax_dtype_device], spec) - melspec = spectral_normalize_torch(melspec) - return melspec - - -def mel_spectrogram_torch( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - """Convert waveform into Mel-frequency Log-amplitude spectrogram. 
- - Args: - y :: (B, T) - Waveforms - Returns: - melspec :: (B, Freq, Frame) - Mel-frequency Log-amplitude spectrogram - """ - # Linear-frequency Linear-amplitude spectrogram :: (B, T) -> (B, Freq, Frame) - spec = spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center) - - # Mel-frequency Log-amplitude spectrogram :: (B, Freq, Frame) -> (B, Freq=num_mels, Frame) - melspec = spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax) - - return melspec diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/preprocess.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/preprocess.py deleted file mode 100644 index 784f46e0bf28f536f381356c117904dda9934e6f..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/preprocess.py +++ /dev/null @@ -1,346 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch - -from lib.infer.infer_libs.uvr5_pack.lib_v5 import nets_61968KB as Nets -from lib.infer.infer_libs.uvr5_pack.lib_v5 import spec_utils -from lib.infer.infer_libs.uvr5_pack.lib_v5.model_param_init import ModelParameters -from lib.infer.infer_libs.uvr5_pack.lib_v5.nets_new import CascadedNet -from lib.infer.infer_libs.uvr5_pack.utils import inference - - -class AudioPre: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/infer/infer_libs/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = Nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." 
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, 
"vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class AudioPreDeEcho: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/infer/infer_libs/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - 
ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) diff --git a/spaces/r3gm/RVC_HF/tools/torchgate/__init__.py b/spaces/r3gm/RVC_HF/tools/torchgate/__init__.py deleted file mode 100644 index b4a12675828dceb6e6270f9439cdf98ea28ea96d..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/tools/torchgate/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -""" -TorchGating is a PyTorch-based implementation of Spectral Gating -================================================ -Author: Asaf Zorea - -Contents --------- -torchgate imports all the functions from PyTorch, and in addition provides: - TorchGating --- A PyTorch module that applies a spectral gate to an input signal - -""" -from .torchgate import TorchGate diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cla-Vocals-Plugin-Crack-PORTABLEl.md b/spaces/raedeXanto/academic-chatgpt-beta/Cla-Vocals-Plugin-Crack-PORTABLEl.md deleted file mode 100644 index 2415800e64c747149f7a027518199e15ac05dff5..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cla-Vocals-Plugin-Crack-PORTABLEl.md +++ /dev/null @@ -1,44 +0,0 @@ -## Cla Vocals Plugin Crackl - - - - - - - - - -**LINK ➡ [https://ekporriola.blogspot.com/?c=2tAnTg](https://ekporriola.blogspot.com/?c=2tAnTg)** - - - - - - - - - - - - - -Daffy Duck, an anthropomorphic black duck who is usually a foil for Bugs Bunny or Porky Pig, has developed a bad habit of buying things online without paying attention to the price. He doesn't realize that his credit card is linked to Bugs Bunny's bank account, and he is spending Bugs' money on useless items. When Bugs finds out, he is furious and forces Daffy to join a group therapy session led by Yosemite Sam, a gruff cowboy with a short temper. There, Daffy meets other shopaholics like Tina Russo, his girlfriend, and Porky Pig, his best friend. Daffy learns to control his impulses and stop shopping online for a while. Meanwhile, Bugs becomes obsessed with a video game called "Gribbler's Quest IV: Elves vs. Fairies vs. Griblets.", a fantasy adventure game where he can create his own character and explore a magical world. 
He spends hours playing the game and neglects his other responsibilities. Daffy notices that Bugs is addicted to the game and confronts him about it. Bugs admits that he plays the game because he enjoys it and it makes him happy. Daffy realizes that he feels the same way about online shopping and decides to quit the group therapy and resume his old habit. He tells Tina and Porky that he loves shopping online and he doesn't want to change. They are shocked and disappointed by his decision, but Daffy doesn't care. He goes back to his computer and orders more stuff with Bugs' money. - - - -The next day, Bugs receives a package from the online store. He opens it and finds a bunch of items that he didn't order, such as a pink tutu, a rubber duck, and a cheese grater. He realizes that Daffy has been shopping online again with his money and he is furious. He confronts Daffy and demands that he return the items and pay him back. Daffy refuses and says that he has a right to shop online as much as he wants. He accuses Bugs of being a hypocrite for playing video games all day. They start to argue and fight over the package. - - - -Meanwhile, Tina and Porky are worried about Daffy's relapse. They decide to visit him and try to convince him to come back to the group therapy. They arrive at his house and see him and Bugs fighting over the package. They try to intervene and calm them down, but they end up getting involved in the scuffle. The package falls to the ground and breaks open, spilling its contents everywhere. Tina picks up the pink tutu and asks Daffy why he bought it. Daffy says that he thought it was cute and he wanted to try it on. Tina is disgusted and throws the tutu at him. Porky picks up the cheese grater and asks Daffy why he bought it. Daffy says that he thought it was a musical instrument and he wanted to play it. Porky is confused and drops the cheese grater on his foot. Yosemite Sam arrives at the scene and sees the chaos. He yells at them to stop fighting and get back to the group therapy. He grabs Daffy by the neck and drags him away. Tina and Porky follow them, leaving Bugs alone with the mess. - - - -Bugs sighs and starts to clean up the mess. He sees the rubber duck and picks it up. He remembers that he used to play with a rubber duck when he was a kid and he had a lot of fun with it. He wonders if he has been missing out on the simple joys of life by playing video games all the time. He decides to give the rubber duck a chance and squeezes it. It makes a squeaky sound that makes him smile. He squeezes it again and laughs. He feels a surge of happiness and decides to quit playing video games for a while and enjoy other things in life. He puts the rubber duck in his pocket and goes outside to look for some friends to play with. - - 145887f19f - - - - - diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dfx For Vlc Player Full Crack Free Download What You Need to Know.md b/spaces/raedeXanto/academic-chatgpt-beta/Dfx For Vlc Player Full Crack Free Download What You Need to Know.md deleted file mode 100644 index e83c98cf82a2a50885e7cb4d1025e60da8693e30..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dfx For Vlc Player Full Crack Free Download What You Need to Know.md +++ /dev/null @@ -1,209 +0,0 @@ -
    -

    Dfx for Vlc Player Full Crack Free Download

    -

    Do you want to enjoy high-quality sound on your computer? Do you want to enhance the audio of your favorite websites, music, videos, games, and other programs? Do you want to use the best open source media player with the best sound enhancer? If you answered yes to any of these questions, then you need Dfx for Vlc Player.

    -

    Dfx is a powerful audio enhancer that works with most media players and web browsers. It can boost your sound quality, volume, bass, and surround sound effects. It can also let you discover new music, access music videos, view song lyrics, and more. But what if you want to use it with Vlc Player, the most popular and versatile media player that can play almost any format of video and audio files? Well, you're in luck because in this article, we will show you how to download and install Dfx for Vlc Player Full Crack for free. You will also learn how to use it and what are its pros and cons.

    -

    Dfx For Vlc Player Full Crack Free Download


    Download File ->>> https://tinourl.com/2uL1ZB



    -

    How Dfx Enhances Your Audio Experience

    -

    Easy to Use and Customizable

    -

    One of the best features of Dfx is that it is very easy to use and customize. You don't need any technical skills or knowledge to enjoy its benefits. All you have to do is install it and start playing music, videos, games, or anything else with sound. That's it! You will immediately notice the difference in sound clarity and punch that your PC provides.

    -

    But if you want to fine-tune your audio settings, you can also do that. Dfx has an intuitive interface with tooltips to guide you, along with a complete list of factory-tuned presets for all kinds of music. You can even create and save your own custom preset.

    -

    Advanced DSP Sound Quality Enhancement Effects

    -

    The most important aspect of any audio program is how well it sounds. And Dfx does not disappoint in this regard. It uses advanced DSP (Digital Signal Processing) technology to enhance your sound quality with 3D surround sound, higher fidelity, booming bass, and more. It can make your speakers sound like a professional stereo system.

    -

    Dfx also has some unique effects that you won't find in other audio enhancers. For example, it has a Dynamic Boost feature that increases the perceived loudness of your audio while minimizing distortion. It also has an Ambience feature that adds natural reverb and spatial effects to your sound.
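To make those two effects a little more concrete, here is a generic Python sketch of the underlying ideas only; DFX's own processing is proprietary, so this is not its actual code. A dynamic boost can be approximated by a soft saturator that raises average loudness while rounding off peaks instead of hard-clipping them, and a basic ambience can be approximated by mixing in a short, decaying reverb tail.

import numpy as np

def dynamic_boost(samples, drive_db=6.0):
    """Soft saturation: louder on average, with peaks rounded off instead of clipped."""
    gain = 10 ** (drive_db / 20.0)
    return np.tanh(gain * samples) / np.tanh(gain)   # a full-scale input still peaks at +/-1.0

def ambience(samples, sr=44100, decay_s=0.3, mix=0.2, seed=0):
    """Blend in a short synthetic reverb tail made of exponentially decaying noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sr * decay_s)) / sr
    tail = rng.standard_normal(t.size) * np.exp(-6.9 * t / decay_s)   # about -60 dB by decay_s
    wet = np.convolve(samples, tail)[: samples.size]
    wet /= max(np.max(np.abs(wet)), 1e-9)                             # keep the wet path in range
    return (1.0 - mix) * samples + mix * wet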

    -

    -

    System-Wide Enhancement

    -

    Another great feature of Dfx is that it works system-wide. That means it enhances the sound of all the websites you visit and all the programs you use on your PC. You don't need to switch between different audio enhancers for different applications.

    -

    Some of the websites and programs that Dfx supports are:

    -
      -
    • YouTube
    • -
    • Pandora
    • -
    • Netflix
    • -
    • Hulu
    • -
    • CNN
    • -
    • ESPN
    • -
    • Last.fm
    • -
    • Vimeo
    • -
    • Dailymotion
    • -
    • Facebook
    • -
    • Yahoo!
    • -
    • Google+
    • -
    • Amazon.com
    • -
    • Google Play
    • -
    • iTunes Movie Trailers
    • -
    • TuneIn
    • -
    • Grooveshark
    • -
    • MOG
    • -
    • VEVO
    • -
    • Rdio
    • -
    • Rhapsody
    • -
    • Flickr
    • -
    • Deezer
    • -
    • Slacker Radio
    • -
    • Jango
    • -
    • SoundCloud
    • -
    • Songza
    • -
    • Turntable.fm
    • -
    • iHeartRadio
    • -
    • SiriusXM Internet Radio
    • - And many more!

      -

      Some of the games that Dfx supports are:

      -
        -
      • Zynga
      • -
      • Angry Birds
      • -
      • Yahoo! Games
      • -
      • Games.com
      • -
      • Addicting Games
      • -
      • Miniclip
      • -
      • Nickelodeon Games
      • -
      • Shockwave.com
      • -
      • PBS Kids Games
      • -
      • Pogo.com
      • -
      • MSN Games
      • -
      • Disney Games
      • -
      • Cartoon Network Games
      • -
      • World of Warcraft
      • -
      • Guild Wars
      • -
      • Diablo
      • -
      • The Sims
      • -
      • SimCity
      • -
      • Starcraft
      • -
• Madden NFL
• And many more!

        -

        How to Download and Install Dfx for Vlc Player Full Crack

        -

        Download Link and Instructions

        -

        If you want to download and install Dfx for Vlc Player Full Crack for free, you can follow these simple steps:

        -
          -
1. Go to the download page of Dfx Audio Enhancer. The linked site is meant to be a safe and reliable source of free downloads, but you can still scan the file with your antivirus program before installing it. If you have any problems with the download link, let me know in the comments and I will try to find another source for you.
2. Select the version (12.023) that matches your Windows operating system (64-bit or 32-bit) and download it.

Once Dfx is installed, you can pick one of its factory-tuned presets for the kind of audio you are playing, such as:

• Rock/Pop
• Classical
• Jazz
• Blues
• Country
• Reggae
• Techno
• Dance
• Headphones
• Speech
• And many more!

          -

          You can also create your own preset by adjusting the sliders and knobs on the interface. You can change the Fidelity, Ambience, 3D Surround, Dynamic Boost, and HyperBass levels. You can also adjust the EQ bands and the Preamp level. You can save your custom preset by clicking on the Save button next to the drop-down menu.

          -

          Adjusting the EQ and Other Settings

          -

          Dfx Audio Enhancer also has a built-in EQ (Equalizer) that you can use to fine-tune your sound. You can access it by clicking on the EQ button at the bottom left corner of the interface. You can choose from several EQ presets or create your own by dragging the sliders. You can also adjust the Preamp level and the Bandwidth.
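If you want a concrete picture of what a single EQ slider is doing, the standard building block is a peaking (bell) filter. The Python sketch below follows the widely used Audio EQ Cookbook biquad formulas and is only an illustration; it is not DFX's own EQ implementation.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(samples, sr, f0=100.0, gain_db=6.0, q=1.0):
    """Boost (or cut, with negative gain_db) a band centred on f0; q sets the bandwidth."""
    amp = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp])
    a = np.array([1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp])
    return lfilter(b / a[0], a / a[0], samples)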

          -

          Besides the EQ, you can also access other settings by clicking on the Settings button at the bottom right corner of the interface. You can change the Skin, Language, Output Mode, and other options. You can also enable or disable some features such as Explore, Lyrics, Artist Bio, Similar Artists, and News.

          -

          Pros and Cons of Dfx for Vlc Player Full Crack

          -

          Pros

          -

          Dfx for Vlc Player Full Crack has many advantages that make it worth downloading and installing. Some of them are:

          -
            -
          • It enhances your sound quality with 3D surround sound, higher fidelity, booming bass, and more.
          • -
          • It works system-wide and supports most websites and programs.
          • -
          • It is easy to use and customize with presets and settings.
          • -
          • It has some unique features such as Dynamic Boost, Ambience, Explore, Lyrics, and more.
          • -
          • It is free to download and activate with a crack.
          • -
          -

          Cons

          -

          However, Dfx for Vlc Player Full Crack also has some drawbacks that you should be aware of. Some of them are:

          -
            -
          • It may not be compatible with some media players or web browsers.
          • -
          • It may cause some performance issues or conflicts with other audio enhancers or drivers.
          • -
          • It may not be legal or ethical to use a cracked version of a paid software.
          • -
          • It may not be safe or secure to download a crack file from an unknown source.
          • -
          • It may not be updated or supported by the developers anymore.
          • -
          -

          Conclusion

          -

          In conclusion, Dfx for Vlc Player Full Crack is a powerful audio enhancer that can improve your sound quality and experience on your computer. It works with most websites and programs and has many features and options to customize your sound. It is also free to download and activate with a crack. However, it also has some drawbacks such as compatibility issues, performance issues, legal issues, security issues, and lack of updates. Therefore, you should weigh the pros and cons before deciding to use it.

          -

          If you want to try Dfx for Vlc Player Full Crack for yourself, you can follow the steps we have provided in this article. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

          -

          Frequently Asked Questions

          -

            What is Dfx Audio Enhancer?

Dfx Audio Enhancer is software that enhances your sound quality with 3D surround sound, higher fidelity, booming bass, and more. It works with most media players and web browsers and has many features and options for customizing your sound.

            What is Vlc Player?

            -

            Vlc Player is a free and open source cross-platform multimedia player and framework that can play most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols. It is developed by the VideoLAN project and has been downloaded over 3 billion times. It has many features and options to play, convert, stream, and record media files.

            -
          1. How can I download Vlc Player?

            -

You can download Vlc Player from its official website: https://www.videolan.org/vlc/. You can choose the version that matches your operating system and language. You can also find other downloads such as skins, extensions, codecs, and plugins on the website.

          2. -
          3. Is Dfx for Vlc Player Full Crack safe to use?

            -

There is no definitive answer to this question. Using a cracked version of paid software may not be legal or ethical in some countries or regions. It may also not be safe or secure to download a crack file from an unknown source. It may contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information. Therefore, you should use Dfx for Vlc Player Full Crack at your own risk and discretion.

          4. -
          5. Are there any alternatives to Dfx for Vlc Player Full Crack?

            -

            Yes, there are some alternatives to Dfx for Vlc Player Full Crack that you can try. Some of them are:

            -
              -
            • FxSound: This is the new and improved version of Dfx Audio Enhancer. It has better sound quality and new features. Plus, it's free.
            • -
            • Equalizer Pro: This is another audio enhancer that can boost your sound quality with 10 band equalizer, bass boost, volume boost, and more. It has a simple and user-friendly interface.
            • -
            • Boom 3D: This is a 3D surround sound enhancer that can create a realistic and immersive sound experience on your PC. It has a sleek and modern interface with various presets and effects.
            • -
            -
          6. -
          7. How can I contact you if I have any questions or feedback?

            -

If you have any questions or feedback about this article or Dfx for Vlc Player Full Crack, you can leave a comment below or send me an email at [email protected]. I will try to reply as soon as possible. Thank you for reading!

          8. -
          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/rafaelpadilla/coco_metrics/coco_metrics/utils.py b/spaces/rafaelpadilla/coco_metrics/coco_metrics/utils.py deleted file mode 100644 index c5250120f45eff182ec863c215f1bd55f770acb9..0000000000000000000000000000000000000000 --- a/spaces/rafaelpadilla/coco_metrics/coco_metrics/utils.py +++ /dev/null @@ -1,156 +0,0 @@ -import copy -import pickle -from typing import Dict, List, Tuple, Union -from tqdm import tqdm -import numpy as np -import torch -import torch.distributed as dist -from datasets import Dataset - -from coco_metrics.pycocotools.cocoeval import COCOeval - -# Typings -_TYPING_BOX = Tuple[float, float, float, float] -_TYPING_SCORES = List[float] -_TYPING_LABELS = List[int] -_TYPING_BOXES = List[_TYPING_BOX] -_TYPING_PRED_REF = Union[_TYPING_SCORES, _TYPING_LABELS, _TYPING_BOXES] -_TYPING_PREDICTION = Dict[str, _TYPING_PRED_REF] -_TYPING_REFERENCE = Dict[str, _TYPING_PRED_REF] -_TYPING_PREDICTIONS = Dict[int, _TYPING_PREDICTION] - -def convert_to_xywh(boxes: torch.Tensor) -> torch.Tensor: - """ - Convert bounding boxes from (xmin, ymin, xmax, ymax) format to (x, y, width, height) format. - - Args: - boxes (torch.Tensor): Tensor of shape (N, 4) representing bounding boxes in \ - (xmin, ymin, xmax, ymax) format. - - Returns: - torch.Tensor: Tensor of shape (N, 4) representing bounding boxes in (x, y, width, height) \ - format. - """ - xmin, ymin, xmax, ymax = boxes.unbind(1) - return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1) - - -def create_common_coco_eval( - coco_eval: COCOeval, img_ids: List[int], eval_imgs: np.ndarray -) -> None: - """ - Create a common COCO evaluation by merging image IDs and evaluation images into the \ - coco_eval object. - - Args: - coco_eval: COCOeval evaluation object. - img_ids (List[int]): Tensor of image IDs. - eval_imgs (torch.Tensor): Tensor of evaluation images. - """ - img_ids, eval_imgs = merge(img_ids, eval_imgs) - img_ids = list(img_ids) - eval_imgs = list(eval_imgs.flatten()) - - coco_eval.evalImgs = eval_imgs - coco_eval.params.imgIds = img_ids - coco_eval._paramsEval = copy.deepcopy(coco_eval.params) - - -def merge(img_ids: List[int], eval_imgs: np.ndarray) -> Tuple[np.ndarray, np.ndarray]: - """ - Merge image IDs and evaluation images from different processes. - - Args: - img_ids (List[int]): List of image ID arrays from different processes. - eval_imgs (np.ndarray): Evaluation images from different processes. - - Returns: - Tuple[np.ndarray, np.ndarray]: Merged image IDs and evaluation images. - """ - all_img_ids = all_gather(img_ids) - all_eval_imgs = all_gather(eval_imgs) - - merged_img_ids = [] - for p in all_img_ids: - merged_img_ids.extend(p) - - merged_eval_imgs = [] - for p in all_eval_imgs: - merged_eval_imgs.append(p) - - merged_img_ids = np.array(merged_img_ids) - merged_eval_imgs = np.concatenate(merged_eval_imgs, 2) - - # keep only unique (and in sorted order) images - merged_img_ids, idx = np.unique(merged_img_ids, return_index=True) - merged_eval_imgs = merged_eval_imgs[..., idx] - - return merged_img_ids, merged_eval_imgs - - -def all_gather(data: List[int]) -> List[List[int]]: - """ - Run all_gather on arbitrary picklable data (not necessarily tensors). 
- - Args: - data (List[int]): any picklable object - Returns: - List[List[int]]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device="cuda") - size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda")) - if local_size != max_size: - padding = torch.empty( - size=(max_size - local_size,), dtype=torch.uint8, device="cuda" - ) - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def get_world_size() -> int: - """ - Get the number of processes in the distributed environment. - - Returns: - int: Number of processes. - """ - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def is_dist_avail_and_initialized() -> bool: - """ - Check if distributed environment is available and initialized. - - Returns: - bool: True if distributed environment is available and initialized, False otherwise. - """ - return dist.is_available() and dist.is_initialized() diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/dns/promises.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/dns/promises.d.ts deleted file mode 100644 index 77cd807bd501b5a4d8687ed604989f6c2c252f2e..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/dns/promises.d.ts +++ /dev/null @@ -1,370 +0,0 @@ -/** - * The `dns.promises` API provides an alternative set of asynchronous DNS methods - * that return `Promise` objects rather than using callbacks. The API is accessible - * via `require('dns').promises` or `require('dns/promises')`. - * @since v10.6.0 - */ -declare module 'dns/promises' { - import { - LookupAddress, - LookupOneOptions, - LookupAllOptions, - LookupOptions, - AnyRecord, - CaaRecord, - MxRecord, - NaptrRecord, - SoaRecord, - SrvRecord, - ResolveWithTtlOptions, - RecordWithTtl, - ResolveOptions, - ResolverOptions, - } from 'node:dns'; - /** - * Returns an array of IP address strings, formatted according to [RFC 5952](https://tools.ietf.org/html/rfc5952#section-6), - * that are currently configured for DNS resolution. A string will include a port - * section if a custom port is used. - * - * ```js - * [ - * '4.4.4.4', - * '2001:4860:4860::8888', - * '4.4.4.4:1053', - * '[2001:4860:4860::8888]:1053', - * ] - * ``` - * @since v10.6.0 - */ - function getServers(): string[]; - /** - * Resolves a host name (e.g. `'nodejs.org'`) into the first found A (IPv4) or - * AAAA (IPv6) record. All `option` properties are optional. If `options` is an - * integer, then it must be `4` or `6` – if `options` is not provided, then IPv4 - * and IPv6 addresses are both returned if found. 
- * - * With the `all` option set to `true`, the `Promise` is resolved with `addresses`being an array of objects with the properties `address` and `family`. - * - * On error, the `Promise` is rejected with an `Error` object, where `err.code`is the error code. - * Keep in mind that `err.code` will be set to `'ENOTFOUND'` not only when - * the host name does not exist but also when the lookup fails in other ways - * such as no available file descriptors. - * - * `dnsPromises.lookup()` does not necessarily have anything to do with the DNS - * protocol. The implementation uses an operating system facility that can - * associate names with addresses, and vice versa. This implementation can have - * subtle but important consequences on the behavior of any Node.js program. Please - * take some time to consult the `Implementation considerations section` before - * using `dnsPromises.lookup()`. - * - * Example usage: - * - * ```js - * const dns = require('dns'); - * const dnsPromises = dns.promises; - * const options = { - * family: 6, - * hints: dns.ADDRCONFIG | dns.V4MAPPED, - * }; - * - * dnsPromises.lookup('example.com', options).then((result) => { - * console.log('address: %j family: IPv%s', result.address, result.family); - * // address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6 - * }); - * - * // When options.all is true, the result will be an Array. - * options.all = true; - * dnsPromises.lookup('example.com', options).then((result) => { - * console.log('addresses: %j', result); - * // addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}] - * }); - * ``` - * @since v10.6.0 - */ - function lookup(hostname: string, family: number): Promise; - function lookup(hostname: string, options: LookupOneOptions): Promise; - function lookup(hostname: string, options: LookupAllOptions): Promise; - function lookup(hostname: string, options: LookupOptions): Promise; - function lookup(hostname: string): Promise; - /** - * Resolves the given `address` and `port` into a host name and service using - * the operating system's underlying `getnameinfo` implementation. - * - * If `address` is not a valid IP address, a `TypeError` will be thrown. - * The `port` will be coerced to a number. If it is not a legal port, a `TypeError`will be thrown. - * - * On error, the `Promise` is rejected with an `Error` object, where `err.code`is the error code. - * - * ```js - * const dnsPromises = require('dns').promises; - * dnsPromises.lookupService('127.0.0.1', 22).then((result) => { - * console.log(result.hostname, result.service); - * // Prints: localhost ssh - * }); - * ``` - * @since v10.6.0 - */ - function lookupService( - address: string, - port: number - ): Promise<{ - hostname: string; - service: string; - }>; - /** - * Uses the DNS protocol to resolve a host name (e.g. `'nodejs.org'`) into an array - * of the resource records. When successful, the `Promise` is resolved with an - * array of resource records. The type and structure of individual results vary - * based on `rrtype`: - * - * - * - * On error, the `Promise` is rejected with an `Error` object, where `err.code`is one of the `DNS error codes`. - * @since v10.6.0 - * @param hostname Host name to resolve. - * @param [rrtype='A'] Resource record type. 
- */ - function resolve(hostname: string): Promise; - function resolve(hostname: string, rrtype: 'A'): Promise; - function resolve(hostname: string, rrtype: 'AAAA'): Promise; - function resolve(hostname: string, rrtype: 'ANY'): Promise; - function resolve(hostname: string, rrtype: 'CAA'): Promise; - function resolve(hostname: string, rrtype: 'CNAME'): Promise; - function resolve(hostname: string, rrtype: 'MX'): Promise; - function resolve(hostname: string, rrtype: 'NAPTR'): Promise; - function resolve(hostname: string, rrtype: 'NS'): Promise; - function resolve(hostname: string, rrtype: 'PTR'): Promise; - function resolve(hostname: string, rrtype: 'SOA'): Promise; - function resolve(hostname: string, rrtype: 'SRV'): Promise; - function resolve(hostname: string, rrtype: 'TXT'): Promise; - function resolve(hostname: string, rrtype: string): Promise; - /** - * Uses the DNS protocol to resolve IPv4 addresses (`A` records) for the`hostname`. On success, the `Promise` is resolved with an array of IPv4 - * addresses (e.g. `['74.125.79.104', '74.125.79.105', '74.125.79.106']`). - * @since v10.6.0 - * @param hostname Host name to resolve. - */ - function resolve4(hostname: string): Promise; - function resolve4(hostname: string, options: ResolveWithTtlOptions): Promise; - function resolve4(hostname: string, options: ResolveOptions): Promise; - /** - * Uses the DNS protocol to resolve IPv6 addresses (`AAAA` records) for the`hostname`. On success, the `Promise` is resolved with an array of IPv6 - * addresses. - * @since v10.6.0 - * @param hostname Host name to resolve. - */ - function resolve6(hostname: string): Promise; - function resolve6(hostname: string, options: ResolveWithTtlOptions): Promise; - function resolve6(hostname: string, options: ResolveOptions): Promise; - /** - * Uses the DNS protocol to resolve all records (also known as `ANY` or `*` query). - * On success, the `Promise` is resolved with an array containing various types of - * records. Each object has a property `type` that indicates the type of the - * current record. And depending on the `type`, additional properties will be - * present on the object: - * - * - * - * Here is an example of the result object: - * - * ```js - * [ { type: 'A', address: '127.0.0.1', ttl: 299 }, - * { type: 'CNAME', value: 'example.com' }, - * { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 }, - * { type: 'NS', value: 'ns1.example.com' }, - * { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] }, - * { type: 'SOA', - * nsname: 'ns1.example.com', - * hostmaster: 'admin.example.com', - * serial: 156696742, - * refresh: 900, - * retry: 900, - * expire: 1800, - * minttl: 60 } ] - * ``` - * @since v10.6.0 - */ - function resolveAny(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve `CAA` records for the `hostname`. On success, - * the `Promise` is resolved with an array of objects containing available - * certification authority authorization records available for the `hostname`(e.g. `[{critical: 0, iodef: 'mailto:pki@example.com'},{critical: 128, issue: 'pki.example.com'}]`). - * @since v15.0.0, v14.17.0 - */ - function resolveCaa(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve `CNAME` records for the `hostname`. On success, - * the `Promise` is resolved with an array of canonical name records available for - * the `hostname` (e.g. `['bar.example.com']`). 
- * @since v10.6.0 - */ - function resolveCname(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve mail exchange records (`MX` records) for the`hostname`. On success, the `Promise` is resolved with an array of objects - * containing both a `priority` and `exchange` property (e.g.`[{priority: 10, exchange: 'mx.example.com'}, ...]`). - * @since v10.6.0 - */ - function resolveMx(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve regular expression based records (`NAPTR`records) for the `hostname`. On success, the `Promise` is resolved with an array - * of objects with the following properties: - * - * * `flags` - * * `service` - * * `regexp` - * * `replacement` - * * `order` - * * `preference` - * - * ```js - * { - * flags: 's', - * service: 'SIP+D2U', - * regexp: '', - * replacement: '_sip._udp.example.com', - * order: 30, - * preference: 100 - * } - * ``` - * @since v10.6.0 - */ - function resolveNaptr(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve name server records (`NS` records) for the`hostname`. On success, the `Promise` is resolved with an array of name server - * records available for `hostname` (e.g.`['ns1.example.com', 'ns2.example.com']`). - * @since v10.6.0 - */ - function resolveNs(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve pointer records (`PTR` records) for the`hostname`. On success, the `Promise` is resolved with an array of strings - * containing the reply records. - * @since v10.6.0 - */ - function resolvePtr(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve a start of authority record (`SOA` record) for - * the `hostname`. On success, the `Promise` is resolved with an object with the - * following properties: - * - * * `nsname` - * * `hostmaster` - * * `serial` - * * `refresh` - * * `retry` - * * `expire` - * * `minttl` - * - * ```js - * { - * nsname: 'ns.example.com', - * hostmaster: 'root.example.com', - * serial: 2013101809, - * refresh: 10000, - * retry: 2400, - * expire: 604800, - * minttl: 3600 - * } - * ``` - * @since v10.6.0 - */ - function resolveSoa(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve service records (`SRV` records) for the`hostname`. On success, the `Promise` is resolved with an array of objects with - * the following properties: - * - * * `priority` - * * `weight` - * * `port` - * * `name` - * - * ```js - * { - * priority: 10, - * weight: 5, - * port: 21223, - * name: 'service.example.com' - * } - * ``` - * @since v10.6.0 - */ - function resolveSrv(hostname: string): Promise; - /** - * Uses the DNS protocol to resolve text queries (`TXT` records) for the`hostname`. On success, the `Promise` is resolved with a two-dimensional array - * of the text records available for `hostname` (e.g.`[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of - * one record. Depending on the use case, these could be either joined together or - * treated separately. - * @since v10.6.0 - */ - function resolveTxt(hostname: string): Promise; - /** - * Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an - * array of host names. - * - * On error, the `Promise` is rejected with an `Error` object, where `err.code`is one of the `DNS error codes`. - * @since v10.6.0 - */ - function reverse(ip: string): Promise; - /** - * Sets the IP address and port of servers to be used when performing DNS - * resolution. 
The `servers` argument is an array of [RFC 5952](https://tools.ietf.org/html/rfc5952#section-6) formatted - * addresses. If the port is the IANA default DNS port (53) it can be omitted. - * - * ```js - * dnsPromises.setServers([ - * '4.4.4.4', - * '[2001:4860:4860::8888]', - * '4.4.4.4:1053', - * '[2001:4860:4860::8888]:1053', - * ]); - * ``` - * - * An error will be thrown if an invalid address is provided. - * - * The `dnsPromises.setServers()` method must not be called while a DNS query is in - * progress. - * - * This method works much like [resolve.conf](https://man7.org/linux/man-pages/man5/resolv.conf.5.html). - * That is, if attempting to resolve with the first server provided results in a`NOTFOUND` error, the `resolve()` method will _not_ attempt to resolve with - * subsequent servers provided. Fallback DNS servers will only be used if the - * earlier ones time out or result in some other error. - * @since v10.6.0 - * @param servers array of `RFC 5952` formatted addresses - */ - function setServers(servers: ReadonlyArray): void; - /** - * Set the default value of `verbatim` in `dns.lookup()` and `dnsPromises.lookup()`. The value could be: - * - * * `ipv4first`: sets default `verbatim` `false`. - * * `verbatim`: sets default `verbatim` `true`. - * - * The default is `ipv4first` and `dnsPromises.setDefaultResultOrder()` have - * higher priority than `--dns-result-order`. When using `worker threads`,`dnsPromises.setDefaultResultOrder()` from the main thread won't affect the - * default dns orders in workers. - * @since v16.4.0, v14.18.0 - * @param order must be `'ipv4first'` or `'verbatim'`. - */ - function setDefaultResultOrder(order: 'ipv4first' | 'verbatim'): void; - class Resolver { - constructor(options?: ResolverOptions); - cancel(): void; - getServers: typeof getServers; - resolve: typeof resolve; - resolve4: typeof resolve4; - resolve6: typeof resolve6; - resolveAny: typeof resolveAny; - resolveCname: typeof resolveCname; - resolveMx: typeof resolveMx; - resolveNaptr: typeof resolveNaptr; - resolveNs: typeof resolveNs; - resolvePtr: typeof resolvePtr; - resolveSoa: typeof resolveSoa; - resolveSrv: typeof resolveSrv; - resolveTxt: typeof resolveTxt; - reverse: typeof reverse; - setLocalAddress(ipv4?: string, ipv6?: string): void; - setServers: typeof setServers; - } -} -declare module 'node:dns/promises' { - export * from 'dns/promises'; -} diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/accepts/index.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/accepts/index.js deleted file mode 100644 index e9b2f63fb16f8ecdeb16c8eced302612794ccf65..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/accepts/index.js +++ /dev/null @@ -1,238 +0,0 @@ -/*! - * accepts - * Copyright(c) 2014 Jonathan Ong - * Copyright(c) 2015 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * Module dependencies. - * @private - */ - -var Negotiator = require('negotiator') -var mime = require('mime-types') - -/** - * Module exports. - * @public - */ - -module.exports = Accepts - -/** - * Create a new Accepts object for the given req. 
- * - * @param {object} req - * @public - */ - -function Accepts (req) { - if (!(this instanceof Accepts)) { - return new Accepts(req) - } - - this.headers = req.headers - this.negotiator = new Negotiator(req) -} - -/** - * Check if the given `type(s)` is acceptable, returning - * the best match when true, otherwise `undefined`, in which - * case you should respond with 406 "Not Acceptable". - * - * The `type` value may be a single mime type string - * such as "application/json", the extension name - * such as "json" or an array `["json", "html", "text/plain"]`. When a list - * or array is given the _best_ match, if any is returned. - * - * Examples: - * - * // Accept: text/html - * this.types('html'); - * // => "html" - * - * // Accept: text/*, application/json - * this.types('html'); - * // => "html" - * this.types('text/html'); - * // => "text/html" - * this.types('json', 'text'); - * // => "json" - * this.types('application/json'); - * // => "application/json" - * - * // Accept: text/*, application/json - * this.types('image/png'); - * this.types('png'); - * // => undefined - * - * // Accept: text/*;q=.5, application/json - * this.types(['html', 'json']); - * this.types('html', 'json'); - * // => "json" - * - * @param {String|Array} types... - * @return {String|Array|Boolean} - * @public - */ - -Accepts.prototype.type = -Accepts.prototype.types = function (types_) { - var types = types_ - - // support flattened arguments - if (types && !Array.isArray(types)) { - types = new Array(arguments.length) - for (var i = 0; i < types.length; i++) { - types[i] = arguments[i] - } - } - - // no types, return all requested types - if (!types || types.length === 0) { - return this.negotiator.mediaTypes() - } - - // no accept header, return first given type - if (!this.headers.accept) { - return types[0] - } - - var mimes = types.map(extToMime) - var accepts = this.negotiator.mediaTypes(mimes.filter(validMime)) - var first = accepts[0] - - return first - ? types[mimes.indexOf(first)] - : false -} - -/** - * Return accepted encodings or best fit based on `encodings`. - * - * Given `Accept-Encoding: gzip, deflate` - * an array sorted by quality is returned: - * - * ['gzip', 'deflate'] - * - * @param {String|Array} encodings... - * @return {String|Array} - * @public - */ - -Accepts.prototype.encoding = -Accepts.prototype.encodings = function (encodings_) { - var encodings = encodings_ - - // support flattened arguments - if (encodings && !Array.isArray(encodings)) { - encodings = new Array(arguments.length) - for (var i = 0; i < encodings.length; i++) { - encodings[i] = arguments[i] - } - } - - // no encodings, return all requested encodings - if (!encodings || encodings.length === 0) { - return this.negotiator.encodings() - } - - return this.negotiator.encodings(encodings)[0] || false -} - -/** - * Return accepted charsets or best fit based on `charsets`. - * - * Given `Accept-Charset: utf-8, iso-8859-1;q=0.2, utf-7;q=0.5` - * an array sorted by quality is returned: - * - * ['utf-8', 'utf-7', 'iso-8859-1'] - * - * @param {String|Array} charsets... 
- * @return {String|Array} - * @public - */ - -Accepts.prototype.charset = -Accepts.prototype.charsets = function (charsets_) { - var charsets = charsets_ - - // support flattened arguments - if (charsets && !Array.isArray(charsets)) { - charsets = new Array(arguments.length) - for (var i = 0; i < charsets.length; i++) { - charsets[i] = arguments[i] - } - } - - // no charsets, return all requested charsets - if (!charsets || charsets.length === 0) { - return this.negotiator.charsets() - } - - return this.negotiator.charsets(charsets)[0] || false -} - -/** - * Return accepted languages or best fit based on `langs`. - * - * Given `Accept-Language: en;q=0.8, es, pt` - * an array sorted by quality is returned: - * - * ['es', 'pt', 'en'] - * - * @param {String|Array} langs... - * @return {Array|String} - * @public - */ - -Accepts.prototype.lang = -Accepts.prototype.langs = -Accepts.prototype.language = -Accepts.prototype.languages = function (languages_) { - var languages = languages_ - - // support flattened arguments - if (languages && !Array.isArray(languages)) { - languages = new Array(arguments.length) - for (var i = 0; i < languages.length; i++) { - languages[i] = arguments[i] - } - } - - // no languages, return all requested languages - if (!languages || languages.length === 0) { - return this.negotiator.languages() - } - - return this.negotiator.languages(languages)[0] || false -} - -/** - * Convert extnames to mime. - * - * @param {String} type - * @return {String} - * @private - */ - -function extToMime (type) { - return type.indexOf('/') === -1 - ? mime.lookup(type) - : type -} - -/** - * Check if mime is valid. - * - * @param {String} type - * @return {String} - * @private - */ - -function validMime (type) { - return typeof type === 'string' -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ben 10 Protector Of Earth Pc Game Torrent Download ((TOP)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ben 10 Protector Of Earth Pc Game Torrent Download ((TOP)).md deleted file mode 100644 index c40df5ac2032c9e6f4bd457a90fe31ecb3510249..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ben 10 Protector Of Earth Pc Game Torrent Download ((TOP)).md +++ /dev/null @@ -1,6 +0,0 @@ -

          ben 10 protector of earth pc game torrent download


          DOWNLOAD >> https://urlgoal.com/2uCJWU



          - -The savings is a discount off the list price. Product Dimensions: 7.51 x 5.24 x 0.52 inches; 3.2 oz. Hardcover, Video game. Rating, all 10+. 3D glasses included. Price, all $6.49. In one of the best areas of the city, in the center where there seemed to be no place for commerce, at the intersection of several streets leading to the city center, on the sidewalk that seemed more like a roadway than a sidewalk, there was a Muppet shop, and or rather, he did not stand, but stood on wheels. It stood on wheels because it was mobile. Inside it was a large trading floor lined with shop windows. The windows were filled with dolls - dolls that looked like people. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/rinong/StyleGAN-NADA/e4e/configs/__init__.py b/spaces/rinong/StyleGAN-NADA/e4e/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/nasfcos_fpn.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/nasfcos_fpn.py deleted file mode 100644 index c4abfe7bde8a69c1219e7532669761c3e9e64e15..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/nasfcos_fpn.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init -from mmcv.ops.merge_cells import ConcatCell -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFCOS_FPN(BaseModule): - """FPN structure in NASFPN. - - Implementation of paper `NAS-FCOS: Fast Neural Architecture Search for - Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=1, - end_level=-1, - add_extra_convs=False, - conv_cfg=None, - norm_cfg=None, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(NASFCOS_FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - self.adapt_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - adapt_conv = ConvModule( - in_channels[i], - out_channels, - 1, - stride=1, - padding=0, - bias=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU', inplace=False)) - self.adapt_convs.append(adapt_conv) - - # C2 is omitted according to the paper - extra_levels = num_outs - self.backbone_end_level + self.start_level - - def build_concat_cell(with_input1_conv, with_input2_conv): - cell_conv_cfg = dict( - kernel_size=1, padding=0, bias=False, groups=out_channels) - return ConcatCell( - in_channels=out_channels, - out_channels=out_channels, - with_out_conv=True, - out_conv_cfg=cell_conv_cfg, - out_norm_cfg=dict(type='BN'), - out_conv_order=('norm', 'act', 'conv'), - with_input1_conv=with_input1_conv, - with_input2_conv=with_input2_conv, - input_conv_cfg=conv_cfg, - input_norm_cfg=norm_cfg, - upsample_mode='nearest') - - # Denote c3=f0, c4=f1, c5=f2 for convince - self.fpn = nn.ModuleDict() - self.fpn['c22_1'] = build_concat_cell(True, True) - self.fpn['c22_2'] = build_concat_cell(True, True) - self.fpn['c32'] = build_concat_cell(True, False) - self.fpn['c02'] = build_concat_cell(True, False) - self.fpn['c42'] = build_concat_cell(True, True) - self.fpn['c36'] = build_concat_cell(True, True) - self.fpn['c61'] = build_concat_cell(True, True) # f9 - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_act_cfg = None if i == 0 \ - else dict(type='ReLU', inplace=False) - self.extra_downsamples.append( - ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - act_cfg=extra_act_cfg, - order=('act', 'norm', 'conv'))) - - def forward(self, inputs): - """Forward function.""" - feats = [ - adapt_conv(inputs[i + self.start_level]) - for i, adapt_conv in enumerate(self.adapt_convs) - ] - - for (i, module_name) in enumerate(self.fpn): - idx_1, idx_2 = int(module_name[1]), int(module_name[2]) - res = self.fpn[module_name](feats[idx_1], feats[idx_2]) - feats.append(res) - - ret = [] - for (idx, input_idx) in zip([9, 8, 7], [1, 2, 3]): # add P3, P4, P5 - feats1, feats2 = feats[idx], feats[5] - feats2_resize = F.interpolate( - feats2, - size=feats1.size()[2:], - mode='bilinear', - align_corners=False) - - feats_sum = feats1 + feats2_resize - ret.append( - F.interpolate( - feats_sum, - size=inputs[input_idx].size()[2:], - mode='bilinear', - align_corners=False)) - - for submodule in self.extra_downsamples: - ret.append(submodule(ret[-1])) - - 
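- # ret now holds the P3-P5 outputs (resized to the matching input feature sizes) plus the extra downsampled levels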
return tuple(ret) - - def init_weights(self): - """Initialize the weights of module.""" - super(NASFCOS_FPN, self).init_weights() - for module in self.fpn.values(): - if hasattr(module, 'conv_out'): - caffe2_xavier_init(module.out_conv.conv) - - for modules in [ - self.adapt_convs.modules(), - self.extra_downsamples.modules() - ]: - for module in modules: - if isinstance(module, nn.Conv2d): - caffe2_xavier_init(module) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/hdetr/swin-l-hdetr_sam-vit-b.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/hdetr/swin-l-hdetr_sam-vit-b.py deleted file mode 100644 index d315fc4a84cce44182342578ace77e6a80adf31a..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/hdetr/swin-l-hdetr_sam-vit-b.py +++ /dev/null @@ -1,82 +0,0 @@ -_base_ = [ - '../_base_/datasets/coco_panoptic.py', '../_base_/default_runtime.py' -] - -plugin = True -plugin_dir = 'projects/instance_segment_anything/' - -model = dict( - type='DetWrapperInstanceSAM', - det_wrapper_type='hdetr', - det_wrapper_cfg=dict(aux_loss=False, - backbone='swin_large', - num_classes=91, - cache_mode=False, - dec_layers=6, - dec_n_points=4, - dilation=False, - dim_feedforward=2048, - drop_path_rate=0.5, - dropout=0.0, - enc_layers=6, - enc_n_points=4, - focal_alpha=0.25, - frozen_weights=None, - hidden_dim=256, - k_one2many=6, - lambda_one2many=1.0, - look_forward_twice=True, - masks=False, - mixed_selection=True, - nheads=8, - num_feature_levels=4, - num_queries_one2many=1500, - num_queries_one2one=900, - position_embedding='sine', - position_embedding_scale=6.283185307179586, - remove_difficult=False, - topk=300, - two_stage=True, - use_checkpoint=False, - use_fp16=False, - use_wandb=False, - with_box_refine=True), - det_model_ckpt='ckpt/swin_l_hdetr.pth', - num_classes=80, - model_type='vit_b', - sam_checkpoint='ckpt/sam_vit_b_01ec64.pth', - use_sam_iou=True, -) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -# test_pipeline, NOTE the Pad's size_divisor is different from the default -# setting (size_divisor=32). While there is little effect on the performance -# whether we use the default setting or use size_divisor=1. - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -dataset_type = 'CocoDataset' -data_root = 'data/coco/' - -data = dict( - samples_per_gpu=1, - workers_per_gpu=1, - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/ops/src/cpu/ms_deform_attn_cpu.cpp b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/ops/src/cpu/ms_deform_attn_cpu.cpp deleted file mode 100644 index e1bf854de1f3860d20b6fef5c1a17817c268e70a..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/ops/src/cpu/ms_deform_attn_cpu.cpp +++ /dev/null @@ -1,41 +0,0 @@ -/*! 
-************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include - -#include -#include - - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - diff --git a/spaces/rootvisionai/few_shot_sam/README.md b/spaces/rootvisionai/few_shot_sam/README.md deleted file mode 100644 index ac44b584590cb1cdc789b47d964a135776aaff44..0000000000000000000000000000000000000000 --- a/spaces/rootvisionai/few_shot_sam/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Few Shot Sam -emoji: 📚 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rorallitri/biomedical-language-models/logs/Aparichit Full __HOT__ Movie Download Kickass.md b/spaces/rorallitri/biomedical-language-models/logs/Aparichit Full __HOT__ Movie Download Kickass.md deleted file mode 100644 index e2407af43e9120404745c35934cdef61744ec714..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Aparichit Full __HOT__ Movie Download Kickass.md +++ /dev/null @@ -1,5 +0,0 @@ -
          -

          Motu Patlu - King Of Kings Blu-ray 1080p Movies ???? ???? Click Here ===> =2sLYWqAKAMotu Patlu: King of KingsDavid Dhawan: The story and inspiration.. free download amazon mp3 [filmy full hd] / motu patlu king of kings 1080p full movie download.Motu Patlu King Of Kings Audio Hindi Movie Free Download 720p BluRay.Motu Patlu: King Of Kings (2017) Movie Free Download 720p BluRay. Motu Patlu or Machine Patlu (born 21 December) is a Tamil.Directed by David Dhawan. With music by Sooryan and lyrics by Vairamuthu. Motu.. free download amazon mp3 [filmy full hd] / motu patlu king of kings 1080p full movie download.movie free download. Motu Patlu is a 1997 Tamil action-drama film directed by David Dhawan and starring Paresh Rawal, Paresh Rawal, Rakhee.Free HD 720p 1080p BluRay 2015 Movieflix The (FULL) movie download in SD,HD 720p 1080p Movie your kings on hi fi.. free download amazon mp3 [filmy full hd] / motu patlu king of kings 1080p full movie download.Motu Patlu - King of Kings. Full Movie Free Download 720p. Motu Patlu audio video is available for free download... Directed by David Dhawan. With music by Sooryan and lyrics by Vairamuthu. Motu Patlu.. Motu Patlu: King of Kings (2017) Full Movie Free Download 720p HDTV 720p.Motu Patlu King Of Kings In Hindi Full Movie - Motu Patlu King Of Kings In Hindi Full Movie. Motu Patlu King of Kings In Hindi HD Full Movie Motu Patlu..Motu Patlu King Of Kings 2017 Full Movie Free Download [2016] DVD-XS.comMoviesfree Download Free Movies Online Watch Movies Online 2017 Free Hollywood Movies Movies Free HD Free HD Movies Movies 2014 HD 4K Bollywood Free 4k Movies 2018 Download 4k Freely At.. Motu Patlu - King of Kings. Full Movie Free Download 720p. Motu Patlu audio video is available for free download... Motu Patlu - King Of Kings. Full Movie Free Download 720p BluRay. Motu Patlu Audio Video Free Download.Hindi Movie - Motu Patlu - ee730c9e81

          -

          Aparichit Full Movie Download Kickass


Download Zip: https://tinurll.com/2uzof1



          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Chief Architect Premier X12 22.5.2.56 Patched Full Version ((LINK)).md b/spaces/rorallitri/biomedical-language-models/logs/Chief Architect Premier X12 22.5.2.56 Patched Full Version ((LINK)).md deleted file mode 100644 index dfd8be0f320d6b5a487cdbee019bb0ad4755d6a7..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Chief Architect Premier X12 22.5.2.56 Patched Full Version ((LINK)).md +++ /dev/null @@ -1,6 +0,0 @@ -

          Chief Architect Premier X12 22.5.2.56 Patched full version


          Download ✫✫✫ https://tinurll.com/2uzmo4



          - - 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Episode 1.4 with Subtitles How to Watch the Show in Your Preferred Language.md b/spaces/rorallitri/biomedical-language-models/logs/Download Episode 1.4 with Subtitles How to Watch the Show in Your Preferred Language.md deleted file mode 100644 index 7a8e5121e6a6bd8b4c41e7426f90e464e7d5674a..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Episode 1.4 with Subtitles How to Watch the Show in Your Preferred Language.md +++ /dev/null @@ -1,36 +0,0 @@ -
          -

          The feed will be added and automatically fetched from the server. Any errors in communicating with the server will cause the item to be marked with a red cross. Once the channel has been successfully fetched, a list of episodes will exist as children of the item. Expand the item (by clicking on the + icon) to view children.

          -

          download Episode 1.4


          Download ✵✵✵ https://tinurll.com/2uzoyF



          -

          There are two ways of listening to episodes. They may be streamed directly from the server, or downloaded to your machine for permanent availability. A downloaded episode may be removed from the disk at will.

          -

1. Stream or download on request: Podcast episodes must be explicitly downloaded to be listened to. They may still be streamed at request.
2. Automatically download media: If a new episode has been retrieved, the data will be automatically downloaded to your computer.
3. Add to media device transfer queue: When episodes are fetched, they will be queued for automatic transfer to your portable music device when it is next connected.

          -

Selecting the "Download document" link from the tables below takes you to a version-specific document containing the download links. Select the version of the Installation Manager or Packaging Utility that you want to download and install:

          -

          download ?search=604 unlimited Movies and videos Download Here.?search=604 Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.

          -

          It was on your site, even if you didn't make a post about it here. The exes are dated 5th April, and I've been using it for more than a month. No big deal, just I nearly didn't bother downloading this new version, assuming I already had it.

          -

          -

Appvn Android is one of the best websites online to download APK apps or files. With Appvn Android, you can download the best free android games, best free android apps for Android tablet or Android phone available. At this website you can get the APK Data for some of the most popular android games & android apps like Minecraft: Pocket Edition, Appvn, CF Mobile, KingRoot, Lucky Patcher and many more.

          -

Code can be checked out from the Git servers via anonymous read-only access. Development code can also be checked out from the Asterisk and libpri Gerrit repositories, and the DAHDI Git repository. Below are example commands you might use to download the source from the various repositories.
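The original page's command listing is not reproduced above, so the following is only a generic sketch: the host names and branch name are placeholders (assumptions), and the real clone URLs should be taken from each project's own documentation.

```
# Placeholder URLs -- substitute the Gerrit/Git addresses published by each project.
git clone https://gerrit.example.org/asterisk asterisk
git clone https://gerrit.example.org/libpri libpri
git clone https://git.example.org/dahdi/linux dahdi-linux

# Anonymous read-only access clones the default branch; switch to a release branch if needed.
cd asterisk
git checkout some-release-branch
```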

          -

APKFab.com and the download link of this app are 100% safe. All download links of apps listed on APKFab.com are from Google Play Store or submitted by users. For the app from Google Play Store, APKFab.com won't modify it in any way. For the app submitted by users, APKFab.com will verify its APK signature safety before releasing it on our website.

          -

          The MIMIC-III database was populated with data that had been acquired during routine hospital care, so there was no associated burden on caregivers and no interference with their workflow. Data was downloaded from several sources, including:

          -

          Approval requires at least a week. Once an application has been approved the researcher will receive emails containing instructions for downloading the database from PhysioNetWorks, a restricted access component of PhysioNet.
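As a rough illustration only (the authoritative steps arrive in the approval email), a credentialed PhysioNet download is typically a recursive wget against the project's files area; the URL below is an assumed example path, not taken from this text.

```
# Assumed example -- replace the URL and credentials with those given in the approval email.
wget -r -N -c -np --user YOUR_PHYSIONET_USERNAME --ask-password \
     https://physionet.org/files/mimiciii/1.4/
```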

          -

          Here are all the published episodes in the History in the Bible podcast, listed from most recent (season 3) to oldest (season 1). The episodes are listed in reverse chronological order, from most recent to oldest.

          -

          If you are a new or casual listener, you may be wondering where to start. Do you have to listen to episodes in order to make sense of the show? Not really. I try to make each show as self-contained as possible. I often provide brief contextual information from previous episodes where I think appropriate. But the history in the Bible is a complex topic. So I often make callbacks (audio hyperlinks, you could say) to specific earlier shows if I need to.

          -

          In this bonus episode, Steve Guerra of the History of the Papacy podcast and I continue our look at some of our favourite moments in the Old Testament or Tanakh. First, Steve investigates the unfortunate incident of Dinah and the Hebites. Then Garry shows a little-known side to Joseph's rule in Egypt.

          -

          The final episodes in my show cover the second half of the second century. In this period, the little Jesus clubs evolved into the imperial church incorporate. This and the next few episodes cover the three dominant personalities of that period.

          -

          Steve Guerra from the History in the Papacy podcast and I conclude our mini-series on the Twelve minor prophets of the OT/Tanakh. In this episode we have a bit of fun and rank the Twelve using our own entirely ridiculous criteria.

          -

          In this bonus episode, Steve Guerra of the History of the Papacy podcast and I take a look at some of our favourite moments in the Old Testament or Tanakh. First, Steve wonders what the deal is with Melchizedek.

          -

          In this bonus episode, Gil Kidron of A Podcast of Biblical Proportions and I discuss Gil's theory that the scribe Baruch was instrumental in writing (or editing or redacting) the book of Genesis, using the life of his master the prophet Jeremiah as a model.

          -

          By the year 70, all of the disciples save perhaps John, were dead. Their inheritors are traditionally known as the Apostolic fathers, although many scholars would object to that appellation. I explore the fathers in this and the next episode. In this show I present the very earliest: Bishop Clement of Rome, and the anonymous author of the Didache, a fascinating look into earliest Christian practices. Along the way I speculate about the relationship between Jewish and Christian practices of the time. Who took what from whom? I finish up with the earliest Christian commentator, Papias of Hierapolis; and with Ignatius of Antioch.

          -

In this bonus episode, Gil Kidron and Rutger Vos graciously invite me on to their long-running show Pod Academy. This show is dedicated to applying a critical intellect to popular media, especially movies or TV series. We discuss the 2014 movie Noah, starring Russell Crowe, Anthony Hopkins, Jennifer Connelly, Emma Watson and Ray Winstone, doing what Ray Winstone always does: being himself.

          -

          We believe there were four distinct early Christian communities: the gentile clubs of Paul, the Jewish clubs of Peter and James, the clubs who followed the writings of Thomas, and clubs attached to the disciple John. In this episode I tackle the first three.

          -

          In this bonus episode I am joined by Omri and Gil of the exuberant podcast A Podcast of Biblical Proportions for a discussion about the Patriarchs Abraham, Isaac, Jacob, and a little bit about Joseph..

          -

          In this bonus episode, Steve Guerra and I continue our series on the twelve minor Old Testament prophets. Here we tackle Obadiah and Jonah. Obadiah is the least read book in the Bible, for very good reason. Everyone thinks that Jonah is a story about a fish. That is the least part of the story.

          -

In this bonus episode, Steve Guerra and I continue our series on the twelve minor Old Testament prophets. Here we tackle Amos and Hosea. Amos is the perfect pocket prophet. Amos gives you everything you want in a prophet, in an easily digestible form. On the other hand, his near contemporary Hosea is barking mad. Hear all about them in this episode.

          -

          In this bonus episode I am joined by Steve Guerra of the History of the Papacy podcast. We introduce our latest mini-series, the twelve minor prophets of the Old Testament. We will cover Hosea, Joel, Amos, Obadiah, Jonah, Micah, Nahum, Habakkuk, Zephaniah, Haggai, Zechariah, and Malachi.

          -

As usual, Paul is violently ejected. This time, the pagans are to blame. Paul tells all and sundry that he has the monopoly on religious trinkets. That really upsets all those making a fortune selling relics of the great goddess Artemis. Paul barely makes it out of town with his skin intact. He travels through Macedonia and Greece, then back to Asia Minor. He delivers a melancholy speech at Miletus, and reluctantly turns toward Jerusalem. I finish the episode with a letter Paul wrote during the mission, his angry letter to the Galatians, where he denounces the circumcision party of James and Peter.

          -

          This is a bonus episode for season two. Steve Guerra and I tackle the book of Daniel. We all know the book's stories of Daniel: the lion's den, the fiery furnace, and the writing on the wall. We discover a book of two parts, one of which claims to be a reliable history of Babylonian times. Spoiler: It's not. The second half is the only apocalypse in the Old Testament.

          -

          This is another bonus episode for season two. My long-time collaborator, Steve Guerra, attended the IntelligentSpeech podcasting conference in New York in June 2019. I appeared with Steve thanks to the magic that is Skype. We talk all things Biblical. I hope you enjoy this bonus show. The conference was organized by Roifield Brown, producer of numerous podcasts: How Jamaica Conquered the World, and The Things That Made England, amongst others. Roifield was the man who introduced me to history podcasting.

          -

          My Christmas special relates the story of Christmas as told by the gospel of Luke. With lots of canticles: the Magnificat, the Benedictus, and the Nunc Dimmitis. Luke has many unique stories. He concentrates on Jesus' mother Mary and her relative Elizabeth, mother of John the Baptist. The angel Gabriel makes two cameo appearances. My next special episode, on Epiphany, relates the gospel of Matthew's version of events of the nativity.

          -

          First in a mini-series on the history of the Jews and the province of Judea under the Hellenistic empires, and under the Maccabeans. I start with a summary of the history I will expand on in the next few episodes. Then I present our sources for that history, Josephus and Maccabees. I conclude with a few notes about the oddities of the Ethiopian orthodox biblical canon.

          -

          The Jews have a placid existence under Persian rule, and create Judaism. They reconstruct their religion, one now without kings and prophets. From now on, the Law is all. I discuss the last of the books of the Tanakh: the romances of Esther and Judith, the hateful but mercifully brief prophet Obadiah, and the funniest book in the canon, Jonah. Daniel gets his chance in a later episode.

          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Khallballi-Fun Unlimited 1080p Movies Free Discover the Secrets of this Comedy Thriller that Never Hit the Theaters.md b/spaces/rorallitri/biomedical-language-models/logs/Khallballi-Fun Unlimited 1080p Movies Free Discover the Secrets of this Comedy Thriller that Never Hit the Theaters.md deleted file mode 100644 index 95797a85afae9f362535ddde163321e9b41b0a38..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Khallballi-Fun Unlimited 1080p Movies Free Discover the Secrets of this Comedy Thriller that Never Hit the Theaters.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Khallballi-Fun Unlimited 1080p movies free


          DOWNLOAD ✦✦✦ https://tinurll.com/2uznlG



          -
          - aaccfb2cb3
          -
          -
          -

          diff --git a/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/__init__.py b/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/scedlatioru/img-to-music/example/Battle Of Empires 1914-1918 Honor Of The Empire Hack Torrent.md b/spaces/scedlatioru/img-to-music/example/Battle Of Empires 1914-1918 Honor Of The Empire Hack Torrent.md deleted file mode 100644 index 5a7cb84c195071f01506d00c2f4644054cb771d5..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Battle Of Empires 1914-1918 Honor Of The Empire Hack Torrent.md +++ /dev/null @@ -1,9 +0,0 @@ -
          -

circle empires rivals cracked pc game in direct links and torrent. circle empires rivals on this fast-paced multiplayer rts, you and your mates will manage your individual tiny kingdoms and conquer a world made up of circles. every one is linked to all the others.

          -


          -

          Battle Of Empires 1914-1918 Honor Of The Empire Hack Torrent


          Download Zip ☆☆☆ https://gohhs.com/2uEA0S



          -

          war of empires 2 is a free-to-play online multiplayer shooter game developed by german studio hardcore gaming. the game is a remake of the 1996 game war of empires, and is set in a period of world war i and the russian civil war. this game has been modded and is an all-new game. this mod is an all-new game with a new engine. this mod replaces the original game's campaign and all other game content (all of the levels, all of the units, all of the buildings, etc.).

          -

          war of empires 2 is a free-to-play online multiplayer game developed by german studio hardcore gaming. the game is a remake of the 1996 game war of empires, and is set in a period of world war i and the russian civil war. this game has been modded and is an all-new game. this mod is an all-new game with a new engine. this mod replaces the original game's campaign and all other game content (all of the levels, all of the units, all of the buildings, etc.).

          -


          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Crack Para Doccf 2.5 LINK.md b/spaces/scedlatioru/img-to-music/example/Crack Para Doccf 2.5 LINK.md deleted file mode 100644 index a54e2e36c70f5a4c5eb5385cb9156eacd1ae1ca9..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Crack Para Doccf 2.5 LINK.md +++ /dev/null @@ -1,11 +0,0 @@ -

          Crack para doccf 2.5


Download: https://gohhs.com/2uEAgf



          -
          -Crack for doccf 2.5 DOWNLOAD: 372a6038bc Related HeliconSoft Helicon Focus V5.3.7 including Crack Key [iahq76] Download noor ul . Download full version doccf cfg with crack for Microsoft Office Word (Word 2007) and Excel (Excel 2007). doccf - free downloads and reviews - CNET Download. -docf for doccf 2.5. -Doccf for doccf 2.5 - download, doccf for doccf 2.5. -Download Microsoft Word - Microsoft Office Docs. -Download Office 2013 for free [687. -Download Microsoft Word - Microsoft Office Docs. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Havit Mouse Hv-ms672 Software HOT Download.md b/spaces/scedlatioru/img-to-music/example/Havit Mouse Hv-ms672 Software HOT Download.md deleted file mode 100644 index b11449a5ad249eafacbdf875adf98a2921dbdefc..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Havit Mouse Hv-ms672 Software HOT Download.md +++ /dev/null @@ -1,65 +0,0 @@ - -

          Havit Mouse HV-MS672 Software Download: How to Customize Your Gaming Mouse

          - -

          If you are a gamer who wants to have more control and precision over your mouse movements, you may want to try Havit Mouse HV-MS672 Software Download. This is a software that can help you adjust the settings of your Havit Mouse HV-MS672, a gaming mouse that offers high performance and comfort.

          - -

          Havit Mouse HV-MS672 Software Download is a software that can help you change the DPI, polling rate, lighting effects, macros, and buttons of your Havit Mouse HV-MS672. You can also create different profiles for different games and applications.

          -

          havit mouse hv-ms672 software download


Download File: https://gohhs.com/2uEAtw



          - -

          In this article, we will explain what Havit Mouse HV-MS672 Software Download is, how it works, and how you can download it for free and enjoy its features.

          - -

          What is Havit Mouse HV-MS672 Software Download?

          - -

Havit Mouse HV-MS672 Software Download is a software that can help you customize your Havit Mouse HV-MS672, a gaming mouse that has an ergonomic design, 7 programmable buttons, 4 adjustable DPI levels, and 7 breathing LED lights.

          - -

          Havit Mouse HV-MS672 Software Download can help you fine-tune your mouse settings according to your preferences and gaming needs. You can use the software to:

          - -
            -
• Change the DPI (dots per inch) of your mouse, which affects the sensitivity and speed of your cursor movement. You can choose from 4 levels: 800, 1200, 1600, and 2400 DPI.
• Change the polling rate of your mouse, which affects the responsiveness and accuracy of your mouse. You can choose from 4 levels: 125 Hz, 250 Hz, 500 Hz, and 1000 Hz.
• Change the lighting effects of your mouse, which affects the color and mode of the LED lights on your mouse. You can choose from 7 colors: red, green, blue, yellow, purple, cyan, and white. You can also choose from 3 modes: constant light, breathing light, and off.
• Change the macros of your mouse, which affects the actions that are performed when you press a button on your mouse. You can record and assign different keystrokes or commands to each button on your mouse.
• Change the buttons of your mouse, which affects the function of each button on your mouse. You can assign different functions to each button on your mouse, such as left click, right click, scroll up, scroll down, forward, backward, etc.

          Havit Mouse HV-MS672 Software Download can also help you create different profiles for different games and applications. You can save up to 5 profiles on your computer and switch between them easily using the software or the buttons on your mouse.

          - -

          How to Download Havit Mouse HV-MS672 Software for Free?

          - -

          If you want to download Havit Mouse HV-MS672 Software for free and enjoy its features, you can follow these simple steps:

          - -
            -
1. Click on this link: https://www.havit.hk/downloads/
2. Scroll down to find the product named "HV-MS672 User Manual"
3. Click on "Download" to download the file named "HV-MS672.zip"
4. Extract the file using WinRAR or any other software that can unzip files
5. Run the file named "setup.exe" to install the software
6. Follow the instructions on the screen to complete the installation
7. Launch the software by double-clicking on the icon on your desktop or system tray
8. Connect your Havit Mouse HV-MS672 to your computer using a USB port
9. Enjoy Havit Mouse HV-MS672 Software!
          - -

          Note: You may need to disable your antivirus or firewall before downloading or installing Havit Mouse HV-MS672 Software, as they may detect it as a threat or block it from running.

          - -

          Conclusion

          - -

          Havit Mouse HV-MS672 Software Download is a software that can help you customize your Havit Mouse HV-MS672, a gaming mouse that offers high performance and comfort.

          -

          - -

          It can help you adjust the DPI, polling rate, lighting effects, macros, and buttons of your mouse according to your preferences and gaming needs. It can also help you create different profiles for different games and applications.

          - -

          You can download Havit Mouse HV-MS672 Software Download for free from this link: https://www.havit.hk/downloads/

          - -

          If you want to have more control and precision over your mouse movements, you should give Havit Mouse HV-MS672 Software Download a try today!

          -


          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet/bin/st_trans.py b/spaces/segments-tobias/conex/espnet/bin/st_trans.py deleted file mode 100644 index 227a10b0a222f95335d0db67119b55a39aff5ad4..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/bin/st_trans.py +++ /dev/null @@ -1,183 +0,0 @@ -#!/usr/bin/env python3 -# encoding: utf-8 - -# Copyright 2019 Kyoto University (Hirofumi Inaguma) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""End-to-end speech translation model decoding script.""" - -import logging -import os -import random -import sys - -import configargparse -import numpy as np - - -# NOTE: you need this func to generate our sphinx doc -def get_parser(): - """Get default arguments.""" - parser = configargparse.ArgumentParser( - description="Translate text from speech using a speech translation " - "model on one CPU or GPU", - config_file_parser_class=configargparse.YAMLConfigFileParser, - formatter_class=configargparse.ArgumentDefaultsHelpFormatter, - ) - # general configuration - parser.add("--config", is_config_file=True, help="Config file path") - parser.add( - "--config2", - is_config_file=True, - help="Second config file path that overwrites the settings in `--config`", - ) - parser.add( - "--config3", - is_config_file=True, - help="Third config file path that overwrites " - "the settings in `--config` and `--config2`", - ) - - parser.add_argument("--ngpu", type=int, default=0, help="Number of GPUs") - parser.add_argument( - "--dtype", - choices=("float16", "float32", "float64"), - default="float32", - help="Float precision (only available in --api v2)", - ) - parser.add_argument( - "--backend", - type=str, - default="chainer", - choices=["chainer", "pytorch"], - help="Backend library", - ) - parser.add_argument("--debugmode", type=int, default=1, help="Debugmode") - parser.add_argument("--seed", type=int, default=1, help="Random seed") - parser.add_argument("--verbose", "-V", type=int, default=1, help="Verbose option") - parser.add_argument( - "--batchsize", - type=int, - default=1, - help="Batch size for beam search (0: means no batch processing)", - ) - parser.add_argument( - "--preprocess-conf", - type=str, - default=None, - help="The configuration file for the pre-processing", - ) - parser.add_argument( - "--api", - default="v1", - choices=["v1", "v2"], - help="Beam search APIs " - "v1: Default API. " - "It only supports the ASRInterface.recognize method and DefaultRNNLM. " - "v2: Experimental API. " - "It supports any models that implements ScorerInterface.", - ) - # task related - parser.add_argument( - "--trans-json", type=str, help="Filename of translation data (json)" - ) - parser.add_argument( - "--result-label", - type=str, - required=True, - help="Filename of result label data (json)", - ) - # model (parameter) related - parser.add_argument( - "--model", type=str, required=True, help="Model file parameters to read" - ) - # search related - parser.add_argument("--nbest", type=int, default=1, help="Output N-best hypotheses") - parser.add_argument("--beam-size", type=int, default=1, help="Beam size") - parser.add_argument("--penalty", type=float, default=0.0, help="Incertion penalty") - parser.add_argument( - "--maxlenratio", - type=float, - default=0.0, - help="""Input length ratio to obtain max output length. 
- If maxlenratio=0.0 (default), it uses a end-detect function - to automatically find maximum hypothesis lengths""", - ) - parser.add_argument( - "--minlenratio", - type=float, - default=0.0, - help="Input length ratio to obtain min output length", - ) - # multilingual related - parser.add_argument( - "--tgt-lang", - default=False, - type=str, - help="target language ID (e.g., , , and etc.)", - ) - return parser - - -def main(args): - """Run the main decoding function.""" - parser = get_parser() - args = parser.parse_args(args) - - # logging info - if args.verbose == 1: - logging.basicConfig( - level=logging.INFO, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - elif args.verbose == 2: - logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - else: - logging.basicConfig( - level=logging.WARN, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - logging.warning("Skip DEBUG/INFO messages") - - # check CUDA_VISIBLE_DEVICES - if args.ngpu > 0: - cvd = os.environ.get("CUDA_VISIBLE_DEVICES") - if cvd is None: - logging.warning("CUDA_VISIBLE_DEVICES is not set.") - elif args.ngpu != len(cvd.split(",")): - logging.error("#gpus is not matched with CUDA_VISIBLE_DEVICES.") - sys.exit(1) - - # TODO(mn5k): support of multiple GPUs - if args.ngpu > 1: - logging.error("The program only supports ngpu=1.") - sys.exit(1) - - # display PYTHONPATH - logging.info("python path = " + os.environ.get("PYTHONPATH", "(None)")) - - # seed setting - random.seed(args.seed) - np.random.seed(args.seed) - logging.info("set random seed = %d" % args.seed) - - # trans - logging.info("backend = " + args.backend) - if args.backend == "pytorch": - # Experimental API that supports custom LMs - from espnet.st.pytorch_backend.st import trans - - if args.dtype != "float32": - raise NotImplementedError( - f"`--dtype {args.dtype}` is only available with `--api v2`" - ) - trans(args) - else: - raise ValueError("Only pytorch are supported.") - - -if __name__ == "__main__": - main(sys.argv[1:]) diff --git a/spaces/senquan/ChuanhuChatGPT/run_Windows.bat b/spaces/senquan/ChuanhuChatGPT/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/senquan/ChuanhuChatGPT/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... 
- -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/seok07/1JK50/vc_infer_pipeline.py b/spaces/seok07/1JK50/vc_infer_pipeline.py deleted file mode 100644 index 7261742c30f64df435ed3fdebaafd969e9563d98..0000000000000000000000000000000000000000 --- a/spaces/seok07/1JK50/vc_infer_pipeline.py +++ /dev/null @@ -1,363 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss,librosa -from scipy import signal -from functools import lru_cache - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav={} -@lru_cache -def cache_harvest_f0(input_audio_path,fs,f0max,f0min,frame_period): - audio=input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - -def change_rms(data1,sr1,data2,sr2,rate):#1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms(y=data1, frame_length=sr1//2*2, hop_length=sr1//2)#每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2//2*2, hop_length=sr2//2) - rms1=torch.from_numpy(rms1) - rms1=F.interpolate(rms1.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze() - rms2=torch.from_numpy(rms2) - rms2=F.interpolate(rms2.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze() - rms2=torch.max(rms2,torch.zeros_like(rms2)+1e-6) - data2*=(torch.pow(rms1,torch.tensor(1-rate))*torch.pow(rms2,torch.tensor(rate-1))).numpy() - return data2 - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0(self, input_audio_path,x, p_len, f0_up_key, f0_method,filter_radius, inp_f0=None): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path]=x.astype(np.double) - f0=cache_harvest_f0(input_audio_path,self.sr,f0_max,f0_min,10) - if(filter_radius>2): - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : 
self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0])if version=="v1"else logits[0] - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in 
range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(input_audio_path,audio_pad, p_len, f0_up_key, f0_method,filter_radius, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if(rms_mix_rate!=1): - audio_opt=change_rms(audio,16000,audio_opt,tgt_sr,rms_mix_rate) - if(resample_sr>=16000 and tgt_sr!=resample_sr): - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max=np.abs(audio_opt).max()/0.99 - max_int16=32768 - if(audio_max>1):max_int16/=audio_max - audio_opt=(audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/sh0kul/DTPDC-Deploy/app.py b/spaces/sh0kul/DTPDC-Deploy/app.py deleted file mode 100644 index 735d546d650fb9b71889095514e84261f1c14637..0000000000000000000000000000000000000000 --- a/spaces/sh0kul/DTPDC-Deploy/app.py +++ /dev/null @@ -1,167 +0,0 @@ -import torch -import torch.nn.functional as F -from PIL import Image -import pytorch_lightning as pl -import torch.nn as nn -from torchvision import transforms as T -from torchvision import models -import matplotlib.pyplot as plt -import onnxruntime as ort -from glob import glob -import streamlit as st -import numpy 
as np -from torchmetrics.functional import accuracy -from torchmetrics import Accuracy - -#Define the labels -labels = ['Defect', 'Non-Defect'] - -# Define the sample images -sample_images = { - "Defect01": "pics/Defect/2.jpg", - "Defect02": "pics/Defect/6.jpg", - "Defect03": "pics/Defect/8.jpg", - "Non-Defect01": "pics/nDefect/3.jpg", - "Non-Defect02": "pics/nDefect/4.jpg", - "Non-Defect03": "pics/nDefect/8.jpg" -} - -class DefectResNet(pl.LightningModule): - def __init__(self, n_classes=2): - super(DefectResNet, self).__init__() - - # จำนวนของพันธุ์output (2) - self.n_classes = n_classes - - #เปลี่ยน layer สุดท้าย - self.backbone = models.resnet50(pretrained=True) - # self.backbone = models.resnet152(pretrained=True) - # self.backbone = models.vgg19(pretrained=True) - for param in self.backbone.parameters(): - param.requires_grad = False - - # เปลี่ยน fc layer เป็น output ขนาด 2 - self.backbone.fc = torch.nn.Linear(self.backbone.fc.in_features, n_classes) #For ResNet base mdoel - # self.backbone.classifier[6] = torch.nn.Linear(self.backbone.classifier[6].in_features, n_classes) #For VGG bse model - - self.entropy_loss = nn.CrossEntropyLoss() - self.accuracy = Accuracy(task="multiclass", num_classes=2) - - self.save_hyperparameters(logger=False) - - def forward(self, x): - preds = self.backbone(x) - return preds - - def training_step(self, batch, batch_idx): - x, y = batch - logits = self.backbone(x) - loss = self.entropy_loss(logits, y) - y_pred = torch.argmax(logits, dim=1) - self.log("train_loss", loss) - self.log("train_acc", self.accuracy(y_pred, y)) - return loss - - def validation_step(self, batch, batch_idx): - x, y = batch - logits = self.backbone(x) - loss = self.entropy_loss(logits, y) - y_pred = torch.argmax(logits, dim=1) - self.log("val_loss", loss) - self.log("val_acc", self.accuracy(y_pred, y)) - return loss - - def configure_optimizers(self): - self.optimizer = torch.optim.AdamW(self.parameters(), lr=1e-3) - return { - "optimizer": self.optimizer, - "monitor": "val_loss", - } - - def test_step(self, batch, batch_idx): - x, y = batch - logits = self.backbone(x) - loss = self.entropy_loss(logits, y) - y_pred = torch.argmax(logits, dim=1) - self.log("val_loss", loss) - self.log("val_acc", self.accuracy(y_pred, y)) - return loss - - def _shared_eval_step(self, batch, batch_idx): - x, y = batch - y_hat = self.model(x) - logits = self.backbone(x) - loss = self.entropy_loss(logits, y) - acc = accuracy(y_hat, y) - return loss, acc - -# Load the model on the appropriate device -loadmodel = DefectResNet() -def load_checkpoint(checkpoint): - loadmodel.load_state_dict(checkpoint["state_dict"]) -load_checkpoint(torch.load("models/model.ckpt", map_location=torch.device('cpu'))) -loadmodel.eval() - -transform = T.Compose([ - T.Resize((224, 224)), - T.ToTensor() -]) - -def predict(image): - image = transform(image).unsqueeze(0) - - # Perform the prediction - with torch.no_grad(): - logits = loadmodel(image) - probs = F.softmax(logits, dim=1) - return probs - -# Define the Streamlit app -def app(): - predictions = None - st.title("Digital textile printing defect classification for industrial.") - uploaded_file = st.file_uploader("Upload your image...", type=["jpg"]) - - with st.expander("Or choose from sample here..."): - sample = st.selectbox(label = "Select here", options = list(sample_images.keys()), label_visibility="hidden") - col1, col2, col3 = st.columns(3) - with col1: - st.image(sample_images["Defect01"], caption="Defect01", use_column_width=True) - with col2: - 
st.image(sample_images["Defect02"], caption="Defect02", use_column_width=True) - with col3: - st.image(sample_images["Defect03"], caption="Defect03", use_column_width=True) - col1, col2, col3 = st.columns(3) - with col1: - st.image(sample_images["Non-Defect01"], caption="Non-Defect01", use_column_width=True) - with col2: - st.image(sample_images["Non-Defect02"], caption="Non-Defect02", use_column_width=True) - with col3: - st.image(sample_images["Non-Defect03"], caption="Non-Defect03", use_column_width=True) - - # If an image is uploaded, make a prediction on it - if uploaded_file is not None: - image = Image.open(uploaded_file) - st.image(image, caption="Uploaded Image", use_column_width=True) - predictions = predict(image) - elif sample: - image = Image.open(sample_images[sample]) - st.image(image, caption=sample.capitalize() + " Image", use_column_width=True) - predictions = predict(image) - - # Show predictions with their probabilities - if predictions is not None: - # st.write(predictions) - st.subheader(f'Predictions : {labels[torch.argmax(predictions[0]).item()]}') - for pred, prob in zip(labels, predictions[0]): - st.write(f"{pred}: {prob * 100:.2f}%") - st.progress(prob.item()) - else: - st.write("No predictions.") - st.subheader("Credits") - st.write("By : Settapun Laoaree | AI-Builders") - st.markdown("Source : [Github](https://github.com/ShokulSet/DefectDetection-AIBuilders) [Hugging Face](https://huggingface.co/spaces/sh0kul/DefectDetection-Deploy)") - -# Run the app -if __name__ == "__main__": - app() \ No newline at end of file diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py deleted file mode 100644 index 052df6220595a1b39b7e2aea37ca4872d113dfd2..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py +++ /dev/null @@ -1,395 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR model and criterion classes. -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ -# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR) -# Copyright (c) 2020 SenseTime. All Rights Reserved. 
-# ------------------------------------------------------------------------ -import copy -from typing import List - -import torch -import torch.nn.functional as F -from torch import nn -from torchvision.ops.boxes import nms -from transformers import AutoTokenizer, BertModel, BertTokenizer, RobertaModel, RobertaTokenizerFast - -from groundingdino.util import box_ops, get_tokenlizer -from groundingdino.util.misc import ( - NestedTensor, - accuracy, - get_world_size, - interpolate, - inverse_sigmoid, - is_dist_avail_and_initialized, - nested_tensor_from_tensor_list, -) -from groundingdino.util.utils import get_phrases_from_posmap -from groundingdino.util.visualizer import COCOVisualizer -from groundingdino.util.vl_utils import create_positive_map_from_span - -from ..registry import MODULE_BUILD_FUNCS -from .backbone import build_backbone -from .bertwarper import ( - BertModelWarper, - generate_masks_with_special_tokens, - generate_masks_with_special_tokens_and_transfer_map, -) -from .transformer import build_transformer -from .utils import MLP, ContrastiveEmbed, sigmoid_focal_loss - - -class GroundingDINO(nn.Module): - """This is the Cross-Attention Detector module that performs object detection""" - - def __init__( - self, - backbone, - transformer, - num_queries, - aux_loss=False, - iter_update=False, - query_dim=2, - num_feature_levels=1, - nheads=8, - # two stage - two_stage_type="no", # ['no', 'standard'] - dec_pred_bbox_embed_share=True, - two_stage_class_embed_share=True, - two_stage_bbox_embed_share=True, - num_patterns=0, - dn_number=100, - dn_box_noise_scale=0.4, - dn_label_noise_ratio=0.5, - dn_labelbook_size=100, - text_encoder_type="bert-base-uncased", - sub_sentence_present=True, - max_text_len=256, - ): - """Initializes the model. - Parameters: - backbone: torch module of the backbone to be used. See backbone.py - transformer: torch module of the transformer architecture. See transformer.py - num_queries: number of object queries, ie detection slot. This is the maximal number of objects - Conditional DETR can detect in a single image. For COCO, we recommend 100 queries. - aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used. 
- """ - super().__init__() - self.num_queries = num_queries - self.transformer = transformer - self.hidden_dim = hidden_dim = transformer.d_model - self.num_feature_levels = num_feature_levels - self.nheads = nheads - self.max_text_len = 256 - self.sub_sentence_present = sub_sentence_present - - # setting query dim - self.query_dim = query_dim - assert query_dim == 4 - - # for dn training - self.num_patterns = num_patterns - self.dn_number = dn_number - self.dn_box_noise_scale = dn_box_noise_scale - self.dn_label_noise_ratio = dn_label_noise_ratio - self.dn_labelbook_size = dn_labelbook_size - - # bert - self.tokenizer = get_tokenlizer.get_tokenlizer(text_encoder_type) - self.bert = get_tokenlizer.get_pretrained_language_model(text_encoder_type) - self.bert.pooler.dense.weight.requires_grad_(False) - self.bert.pooler.dense.bias.requires_grad_(False) - self.bert = BertModelWarper(bert_model=self.bert) - - self.feat_map = nn.Linear(self.bert.config.hidden_size, self.hidden_dim, bias=True) - nn.init.constant_(self.feat_map.bias.data, 0) - nn.init.xavier_uniform_(self.feat_map.weight.data) - # freeze - - # special tokens - self.specical_tokens = self.tokenizer.convert_tokens_to_ids(["[CLS]", "[SEP]", ".", "?"]) - - # prepare input projection layers - if num_feature_levels > 1: - num_backbone_outs = len(backbone.num_channels) - input_proj_list = [] - for _ in range(num_backbone_outs): - in_channels = backbone.num_channels[_] - input_proj_list.append( - nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - ) - ) - for _ in range(num_feature_levels - num_backbone_outs): - input_proj_list.append( - nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(32, hidden_dim), - ) - ) - in_channels = hidden_dim - self.input_proj = nn.ModuleList(input_proj_list) - else: - assert two_stage_type == "no", "two_stage_type should be no if num_feature_levels=1 !!!" - self.input_proj = nn.ModuleList( - [ - nn.Sequential( - nn.Conv2d(backbone.num_channels[-1], hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - ) - ] - ) - - self.backbone = backbone - self.aux_loss = aux_loss - self.box_pred_damping = box_pred_damping = None - - self.iter_update = iter_update - assert iter_update, "Why not iter_update?" 
- - # prepare pred layers - self.dec_pred_bbox_embed_share = dec_pred_bbox_embed_share - # prepare class & box embed - _class_embed = ContrastiveEmbed() - - _bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3) - nn.init.constant_(_bbox_embed.layers[-1].weight.data, 0) - nn.init.constant_(_bbox_embed.layers[-1].bias.data, 0) - - if dec_pred_bbox_embed_share: - box_embed_layerlist = [_bbox_embed for i in range(transformer.num_decoder_layers)] - else: - box_embed_layerlist = [ - copy.deepcopy(_bbox_embed) for i in range(transformer.num_decoder_layers) - ] - class_embed_layerlist = [_class_embed for i in range(transformer.num_decoder_layers)] - self.bbox_embed = nn.ModuleList(box_embed_layerlist) - self.class_embed = nn.ModuleList(class_embed_layerlist) - self.transformer.decoder.bbox_embed = self.bbox_embed - self.transformer.decoder.class_embed = self.class_embed - - # two stage - self.two_stage_type = two_stage_type - assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format( - two_stage_type - ) - if two_stage_type != "no": - if two_stage_bbox_embed_share: - assert dec_pred_bbox_embed_share - self.transformer.enc_out_bbox_embed = _bbox_embed - else: - self.transformer.enc_out_bbox_embed = copy.deepcopy(_bbox_embed) - - if two_stage_class_embed_share: - assert dec_pred_bbox_embed_share - self.transformer.enc_out_class_embed = _class_embed - else: - self.transformer.enc_out_class_embed = copy.deepcopy(_class_embed) - - self.refpoint_embed = None - - self._reset_parameters() - - def _reset_parameters(self): - # init input_proj - for proj in self.input_proj: - nn.init.xavier_uniform_(proj[0].weight, gain=1) - nn.init.constant_(proj[0].bias, 0) - - def init_ref_points(self, use_num_queries): - self.refpoint_embed = nn.Embedding(use_num_queries, self.query_dim) - - def forward(self, samples: NestedTensor, targets: List = None, **kw): - """The forward expects a NestedTensor, which consists of: - - samples.tensor: batched images, of shape [batch_size x 3 x H x W] - - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels - - It returns a dict with the following elements: - - "pred_logits": the classification logits (including no-object) for all queries. - Shape= [batch_size x num_queries x num_classes] - - "pred_boxes": The normalized boxes coordinates for all queries, represented as - (center_x, center_y, width, height). These values are normalized in [0, 1], - relative to the size of each individual image (disregarding possible padding). - See PostProcess for information on how to retrieve the unnormalized bounding box. - - "aux_outputs": Optional, only returned when auxilary losses are activated. It is a list of - dictionnaries containing the two above keys for each decoder layer. 
- """ - if targets is None: - captions = kw["captions"] - else: - captions = [t["caption"] for t in targets] - len(captions) - - # encoder texts - tokenized = self.tokenizer(captions, padding="longest", return_tensors="pt").to( - samples.device - ) - ( - text_self_attention_masks, - position_ids, - cate_to_token_mask_list, - ) = generate_masks_with_special_tokens_and_transfer_map( - tokenized, self.specical_tokens, self.tokenizer - ) - - if text_self_attention_masks.shape[1] > self.max_text_len: - text_self_attention_masks = text_self_attention_masks[ - :, : self.max_text_len, : self.max_text_len - ] - position_ids = position_ids[:, : self.max_text_len] - tokenized["input_ids"] = tokenized["input_ids"][:, : self.max_text_len] - tokenized["attention_mask"] = tokenized["attention_mask"][:, : self.max_text_len] - tokenized["token_type_ids"] = tokenized["token_type_ids"][:, : self.max_text_len] - - # extract text embeddings - if self.sub_sentence_present: - tokenized_for_encoder = {k: v for k, v in tokenized.items() if k != "attention_mask"} - tokenized_for_encoder["attention_mask"] = text_self_attention_masks - tokenized_for_encoder["position_ids"] = position_ids - else: - # import ipdb; ipdb.set_trace() - tokenized_for_encoder = tokenized - - bert_output = self.bert(**tokenized_for_encoder) # bs, 195, 768 - - encoded_text = self.feat_map(bert_output["last_hidden_state"]) # bs, 195, d_model - text_token_mask = tokenized.attention_mask.bool() # bs, 195 - # text_token_mask: True for nomask, False for mask - # text_self_attention_masks: True for nomask, False for mask - - if encoded_text.shape[1] > self.max_text_len: - encoded_text = encoded_text[:, : self.max_text_len, :] - text_token_mask = text_token_mask[:, : self.max_text_len] - position_ids = position_ids[:, : self.max_text_len] - text_self_attention_masks = text_self_attention_masks[ - :, : self.max_text_len, : self.max_text_len - ] - - text_dict = { - "encoded_text": encoded_text, # bs, 195, d_model - "text_token_mask": text_token_mask, # bs, 195 - "position_ids": position_ids, # bs, 195 - "text_self_attention_masks": text_self_attention_masks, # bs, 195,195 - } - - # import ipdb; ipdb.set_trace() - - if isinstance(samples, (list, torch.Tensor)): - samples = nested_tensor_from_tensor_list(samples) - features, poss = self.backbone(samples) - - srcs = [] - masks = [] - for l, feat in enumerate(features): - src, mask = feat.decompose() - srcs.append(self.input_proj[l](src)) - masks.append(mask) - assert mask is not None - if self.num_feature_levels > len(srcs): - _len_srcs = len(srcs) - for l in range(_len_srcs, self.num_feature_levels): - if l == _len_srcs: - src = self.input_proj[l](features[-1].tensors) - else: - src = self.input_proj[l](srcs[-1]) - m = samples.mask - mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0] - pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype) - srcs.append(src) - masks.append(mask) - poss.append(pos_l) - - input_query_bbox = input_query_label = attn_mask = dn_meta = None - hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer( - srcs, masks, input_query_bbox, poss, input_query_label, attn_mask, text_dict - ) - - # deformable-detr-like anchor update - outputs_coord_list = [] - for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate( - zip(reference[:-1], self.bbox_embed, hs) - ): - layer_delta_unsig = layer_bbox_embed(layer_hs) - layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig) - layer_outputs_unsig = 
layer_outputs_unsig.sigmoid() - outputs_coord_list.append(layer_outputs_unsig) - outputs_coord_list = torch.stack(outputs_coord_list) - - # output - outputs_class = torch.stack( - [ - layer_cls_embed(layer_hs, text_dict) - for layer_cls_embed, layer_hs in zip(self.class_embed, hs) - ] - ) - out = {"pred_logits": outputs_class[-1], "pred_boxes": outputs_coord_list[-1]} - - # # for intermediate outputs - # if self.aux_loss: - # out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord_list) - - # # for encoder output - # if hs_enc is not None: - # # prepare intermediate outputs - # interm_coord = ref_enc[-1] - # interm_class = self.transformer.enc_out_class_embed(hs_enc[-1], text_dict) - # out['interm_outputs'] = {'pred_logits': interm_class, 'pred_boxes': interm_coord} - # out['interm_outputs_for_matching_pre'] = {'pred_logits': interm_class, 'pred_boxes': init_box_proposal} - - return out - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_coord): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - return [ - {"pred_logits": a, "pred_boxes": b} - for a, b in zip(outputs_class[:-1], outputs_coord[:-1]) - ] - - -@MODULE_BUILD_FUNCS.registe_with_name(module_name="groundingdino") -def build_groundingdino(args): - - backbone = build_backbone(args) - transformer = build_transformer(args) - - dn_labelbook_size = args.dn_labelbook_size - dec_pred_bbox_embed_share = args.dec_pred_bbox_embed_share - sub_sentence_present = args.sub_sentence_present - - model = GroundingDINO( - backbone, - transformer, - num_queries=args.num_queries, - aux_loss=True, - iter_update=True, - query_dim=4, - num_feature_levels=args.num_feature_levels, - nheads=args.nheads, - dec_pred_bbox_embed_share=dec_pred_bbox_embed_share, - two_stage_type=args.two_stage_type, - two_stage_bbox_embed_share=args.two_stage_bbox_embed_share, - two_stage_class_embed_share=args.two_stage_class_embed_share, - num_patterns=args.num_patterns, - dn_number=0, - dn_box_noise_scale=args.dn_box_noise_scale, - dn_label_noise_ratio=args.dn_label_noise_ratio, - dn_labelbook_size=dn_labelbook_size, - text_encoder_type=args.text_encoder_type, - sub_sentence_present=sub_sentence_present, - max_text_len=args.max_text_len, - ) - - return model diff --git a/spaces/shimizukawa/python-no-senpai/config.py b/spaces/shimizukawa/python-no-senpai/config.py deleted file mode 100644 index 7a2036e8c655742b945db110f72a8c593e14cd53..0000000000000000000000000000000000000000 --- a/spaces/shimizukawa/python-no-senpai/config.py +++ /dev/null @@ -1,35 +0,0 @@ -import os - - -SAAS = True - - -def get_db_config(): - url = os.environ["QDRANT_URL"] - api_key = os.environ["QDRANT_API_KEY"] - collection_name = "document-search" - return url, api_key, collection_name - - -def get_local_db_congin(): - url = "localhost" - # api_key = os.environ["QDRANT_API_KEY"] - collection_name = "document-search" - return url, None, collection_name - - -def get_index_names(): - keys = [ - k for k in [ - k.strip().lower() - for k in os.environ.get("INDEX_NAMES", "").split(",") - ] - if k - ] - if not keys: - keys = ["INDEX_NAMES is empty"] - return keys - - -DB_CONFIG = get_db_config() if SAAS else get_local_db_congin() -INDEX_NAMES = get_index_names() \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Call of Duty Mobile Terbaru and Join the World 
Championship 2023 - Compete with the Best Players Around the Globe.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Call of Duty Mobile Terbaru and Join the World Championship 2023 - Compete with the Best Players Around the Globe.md deleted file mode 100644 index 84e7dee184896987c936d6fb1e4da24909444204..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Call of Duty Mobile Terbaru and Join the World Championship 2023 - Compete with the Best Players Around the Globe.md +++ /dev/null @@ -1,106 +0,0 @@ - -

          Download Call of Duty Mobile Terbaru: A Guide for Gamers

          -

          If you are a fan of first-person shooter (FPS) games, you might have heard of Call of Duty, one of the most popular and successful franchises in the genre. But did you know that you can also play Call of Duty on your mobile device? That's right, Call of Duty Mobile is a free-to-play game that brings the thrill of Call of Duty to your phone or tablet. In this article, we will tell you everything you need to know about Call of Duty Mobile, how to download it, and some tips and tricks to help you win.

          -

          What is Call of Duty Mobile?

          -

          Call of Duty Mobile is a mobile version of the iconic FPS franchise that has been around since 2003. It was released in 2019 by Activision and Tencent Games, and has since become one of the most downloaded and played mobile games in the world. It offers an immersive and action-packed gaming experience that rivals console and PC games.

          -

          download call of duty mobile terbaru


          Download Zip >>>>> https://ssurll.com/2uNTnI



          -

          A mobile version of the popular FPS franchise

          -

          Call of Duty Mobile is not just a port or a spin-off of the existing Call of Duty games. It is a standalone game that features its own storyline, characters, weapons, and modes. It also incorporates elements from various Call of Duty titles, such as Black Ops and Modern Warfare, as well as original content exclusive to Call of Duty Mobile. You can play as iconic operators like Captain Price, Ghost, Soap, and more.

          -

          Features multiplayer and battle royale modes

          -

          Call of Duty Mobile has two main modes: multiplayer and battle royale. Multiplayer mode lets you compete with other players in classic game modes such as Team Deathmatch, Domination, and Kill-Confirmed on iconic maps such as Nuketown, Crash, and Hijacked. You can also play ranked matches to climb the leaderboards and earn rewards.

          -

          Battle royale mode lets you survive in a 100-player match on a large map that includes locations from previous Call of Duty games. You can choose to play solo, duo, or squad, and customize your loadout, class, and vehicle. You can also find weapons, armor, perks, and air drops on the map. The last player or team standing wins.

          -

          Offers seasonal content and customization options

          -

          Call of Duty Mobile is constantly updated with new content every season. Each season brings new game modes, maps, events, rewards, and challenges for you to enjoy. You can also unlock and earn dozens of operators, weapons, skins, outfits, scorestreaks, and other gear that you can use to customize your loadout and appearance. You can also use Gunsmith to boost your weapons with attachments and camos.

          -

          How to Download Call of Duty Mobile Terbaru?

          -

          If you are interested in playing Call of Duty Mobile, here are the steps you need to follow to download it on your device.

          -

          Available for Android and iOS devices

          -

          Call of Duty Mobile is compatible with both Android and iOS devices. You can download it from the Google Play Store or the App Store for free. Just search for "Call of Duty Mobile" or use these links:

          - -

          Requires internet connection and sufficient storage space

          -


          To play Call of Duty Mobile, you will need an internet connection as it is an online game. You will also need enough storage space on your device to install the game and its updates. The game size varies depending on your device, but it is around 2 GB for Android and 3 GB for iOS. You can check the available space on your device by going to Settings > Storage.

          -


          -

          Supports controller pairing and voice chat

          -

          If you prefer to play Call of Duty Mobile with a controller, you can pair your device with a compatible Bluetooth controller. You can find the list of supported controllers and the instructions on how to pair them on the official website. You can also adjust the controller settings in the game options.

          -

          If you want to communicate with your teammates or friends, you can use the voice chat feature in Call of Duty Mobile. You can enable or disable the voice chat in the game settings, and choose whether to use the push-to-talk or open mic mode. You can also mute or unmute other players in the lobby or in-game.

          -

          Tips and Tricks for Call of Duty Mobile

          -

          Now that you have downloaded Call of Duty Mobile, you might be wondering how to improve your skills and win more matches. Here are some tips and tricks that can help you become a better player.

          -

          Choose the best controls and settings

          -

          One of the first things you should do is to customize your controls and settings according to your preference and device. You can choose between simple or advanced mode for your shooting controls, and adjust the sensitivity, layout, and size of your buttons. You can also enable or disable features such as aim assist, gyroscope, auto-sprint, and quick run.

          -

You should also optimize your graphics and audio settings to ensure smooth and immersive gameplay. You can change the frame rate, resolution, brightness, and sound effects of the game depending on your device's performance and battery life.

          -

          Use your mini-map and cover

          -

          A key skill in any FPS game is to be aware of your surroundings and enemies. You can use your mini-map in the top left corner of the screen to see where you and your allies are, as well as where the enemies are shooting from. You can also see the objectives, scorestreaks, and vehicles on the map.

          -

You should also use cover as much as possible to avoid getting hit by enemy fire. You can crouch or go prone behind walls, crates, cars, or other objects to reduce your exposure. You can also slide or jump to dodge bullets or surprise your enemies.

          -

          Reload only when necessary and switch to pistol

          -

          A common mistake that many players make is to reload their weapons too often or at the wrong time. Reloading takes time and leaves you vulnerable to enemy attacks. You should only reload when you are safe or out of ammo, and not when you are in the middle of a firefight.

          -

          If you run out of ammo or need a faster weapon, you can switch to your pistol instead of reloading. Your pistol has unlimited ammo and is quicker to draw than your primary weapon. You can also use it to finish off wounded enemies or as a backup weapon.

          -

          Communicate with your team and use your abilities strategically

          -

          Call of Duty Mobile is a team-based game, so communication and coordination are essential for victory. You can use the voice chat or text chat to communicate with your team members, and share information such as enemy locations, strategies, requests, or warnings. You can also use the ping system to mark enemies, items, or locations on the map.

          -

          You should also use your abilities wisely and according to the situation. Each operator has a unique ability that can give you an edge in combat, such as a shield, a drone, a turret, or a stealth suit. Each class in battle royale mode also has a special skill that can help you survive, such as a medic kit, a grappling hook, a scout sensor, or a ninja smoke bomb. You should use these abilities when they are available and when they are most effective.

          -

          Conclusion

          -

          Call of Duty Mobile is an amazing game that lets you enjoy the thrill of Call of Duty on your mobile device. It has everything you need for an exciting and satisfying gaming experience: stunning graphics, smooth gameplay, diverse modes, rich content, and customization options. It is also easy to download and play for free.

          -

          If you want to download Call of Duty Mobile terbaru (the latest version), just follow the steps we have outlined above. And if you want to improve your skills and win more matches, just follow our tips and tricks. We hope this article has been helpful for you. Happy gaming!

          -


          FAQs

          -

          Here are some frequently asked questions about Call of Duty Mobile and their answers.

          -

          Q: How can I play Call of Duty Mobile with my friends?

          -

          A: You can play Call of Duty Mobile with your friends by adding them to your friends list and inviting them to join your lobby. You can also create or join a clan and play with your clan members. You can also use the social media or QR code features to add friends from other platforms.

          -

          Q: How can I get more credits and COD points in Call of Duty Mobile?

          -

          A: You can get more credits and COD points by playing the game and completing missions, events, and challenges. You can also get them by watching ads, participating in lucky draws, or purchasing them with real money.

          -

          Q: How can I report a hacker or a cheater in Call of Duty Mobile?

          -

          A: You can report a hacker or a cheater in Call of Duty Mobile by tapping on their name in the lobby or in-game and selecting the report option. You can also report them by using the in-game feedback system or contacting the customer service.

          -

          Q: How can I update Call of Duty Mobile to the latest version?

          -

          A: You can update Call of Duty Mobile to the latest version by checking for updates on the Google Play Store or the App Store. You can also enable the auto-update feature on your device to download the updates automatically.

          -

          Q: How can I contact the Call of Duty Mobile support team?

          -

A: You can contact the Call of Duty Mobile support team by using the in-game feedback system, visiting the official website, or going to the help center at codm.helpshift.com.

          -
          -
          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Denah Bus Isi 60 untuk Rombongan Wisata Anda.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Denah Bus Isi 60 untuk Rombongan Wisata Anda.md deleted file mode 100644 index e99b41cc19575b676ba837aa2ba6fe49697a7797..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Denah Bus Isi 60 untuk Rombongan Wisata Anda.md +++ /dev/null @@ -1,156 +0,0 @@ - -

          Download Denah Bus Isi 60: A Guide to Bus Layout and Safety

          -

          If you are planning to travel by bus, whether for a tour, a school trip, or a business meeting, you might want to know more about the layout and safety of the bus. One way to do that is to download denah bus isi 60, which is a diagram of the bus seat arrangement and capacity. In this article, we will explain what denah bus isi 60 is, how to download it, how to use it, and how to stay safe on a bus.

          -

          download denah bus isi 60


Download File >>>>> https://ssurll.com/2uO1si



          -

          What is Denah Bus Isi 60?

          -

          Denah bus isi 60 is a term that means "bus layout with 60 seats" in Indonesian. It is a diagram that shows the number, position, and configuration of the seats on a bus. It also indicates the location of the doors, windows, toilets, smoking areas, and other features of the bus.

          -

          The meaning and purpose of denah bus isi 60

          -

The purpose of denah bus isi 60 is to provide information about the bus that you are going to ride or rent. It can help you to choose the best seat for your comfort, convenience, and preference. It can also help you to avoid confusion and conflict with other passengers when boarding or getting off the bus.

          -

          The types and features of denah bus isi 60

          -

          There are different types of denah bus isi 60 depending on the size, shape, and design of the bus. Some common types are:

          -
            -
          • Big bus seat 2-2: This type of bus has two seats on each side of the aisle, with a total capacity of 50 seats. It usually has a toilet and a smoking area at the back of the bus.
          • -
          • Big bus seat 2-3: This type of bus has two seats on one side of the aisle and three seats on the other side, with a total capacity of 59 or 61 seats. It usually has a toilet and a smoking area at the middle or the back of the bus.
          • -
          • Junior bus seat 2-2: This type of bus has two seats on each side of the aisle, with a total capacity of 39 or 41 seats. It is smaller than a big bus but larger than a medium bus.
          • -
          • Medium bus seat 2-2: This type of bus has two seats on each side of the aisle, with a total capacity of 31 to 37 seats. It is smaller than a junior bus but larger than a microbus.
          • -
          • Microbus (elf): This type of bus has two or three seats on each side of the aisle, with a total capacity of 13 to 19 seats. It is also known as an elf or an Isuzu elf.
          • -
          -

          The features of denah bus isi 60 may vary depending on the model, brand, and condition of the bus. Some common features are:

          -
            -
          • Air conditioning: Most buses have air conditioning systems to regulate the temperature inside the bus.
          • -
• Audio system: Most buses have speakers and microphones to play music or make announcements.
• -
• TV and DVD: Some buses have TVs and DVD players to show movies or videos.
          • -
          • Reclining seats: Some buses have seats that can be adjusted to a reclining position for more comfort.
          • -
          • Seat belts: Some buses have seat belts to secure the passengers in case of an accident.
          • -
          • Wi-Fi: Some buses have Wi-Fi access to allow the passengers to connect to the internet.
          • -
          • USB ports: Some buses have USB ports to allow the passengers to charge their devices.
          • -
          -

          The benefits and advantages of denah bus isi 60

          -

          The benefits and advantages of denah bus isi 60 are:

          -
            -
          • It can help you to plan your trip better by knowing the layout and capacity of the bus.
          • -
          • It can help you to book your seat in advance by knowing the availability and price of the seats.
          • -
          • It can help you to enjoy your trip more by choosing the seat that suits your needs and preferences.
          • -
          • It can help you to avoid problems and disputes with other passengers by knowing the rules and etiquette of the bus.
          • -
          • It can help you to stay safe and comfortable on the bus by knowing the features and facilities of the bus.
          • -
          -

          How to Download Denah Bus Isi 60?

          -

          If you want to download denah bus isi 60, you need to find a reliable source, follow the steps, and check the examples. Here are some tips on how to do that:

          -

          The sources and websites for downloading denah bus isi 60

          -

          There are many sources and websites that offer denah bus isi 60 for free or for a fee. Some of them are:

          -
            -
          • Bus rental companies: If you are renting a bus, you can ask the bus rental company for the denah bus isi 60 of the bus that you are going to use. They may provide it on their website, email, or WhatsApp.
          • -
          • Bus ticket agents: If you are buying a bus ticket, you can ask the bus ticket agent for the denah bus isi 60 of the bus that you are going to board. They may provide it on their website, email, or WhatsApp.
          • -
          • Bus operators: If you are traveling by a public bus, you can ask the bus operator for the denah bus isi 60 of the bus that you are going to ride. They may provide it on their website, email, or WhatsApp.
          • -
          • Online platforms: If you are looking for a general denah bus isi 60, you can search for it on online platforms such as Google, Pinterest, or Instagram. They may have various images and links of denah bus isi 60 from different sources.
          • -
          -

          The steps and tips for downloading denah bus isi 60

          -

          The steps and tips for downloading denah bus isi 60 are:

          -
            -
          1. Choose your source and website: Based on your purpose and preference, choose the source and website that offers the denah bus isi 60 that you need.
          2. -
          3. Find your denah bus isi 60: On the website, find the denah bus isi 60 that matches your criteria such as type, size, model, brand, and condition of the bus.
          4. -
5. Download your denah bus isi 60: On the website, click on the download button or link to download your denah bus isi 60. You may need to register, log in, or pay a fee before downloading. You may also need to choose a format such as PDF, JPG, or PNG before downloading (see the sketch after this list).
          6. -
          7. Save your denah bus isi 60: On your device, save your denah bus isi 60 in a folder or location that is easy to access and remember. You may also want to rename your file or add a description for easier identification.
          8. -
          9. Print your denah bus isi 60: If you want to have a hard copy of your denah bus isi 60, you can print it using a printer or a printing service. You may want to adjust the size, resolution, or orientation of your file before printing.
          10. -
          -
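For readers who prefer to script the download step instead of clicking through a website, here is a minimal sketch in Python. It assumes you already have a direct link to the seat-layout file; the URL and filename below are hypothetical placeholders, not a real source.

```python
# Minimal sketch: save a seat-layout file (JPG, PNG, or PDF) from a direct URL.
# The URL and filename are hypothetical placeholders, not a real download source.
import requests


def download_denah(url: str, filename: str) -> None:
    """Fetch the file at `url` and write its raw bytes to `filename`."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail early if the server returns an error
    with open(filename, "wb") as f:
        f.write(response.content)  # write the downloaded bytes to disk


if __name__ == "__main__":
    download_denah("https://example.com/denah-bus-60-seat.jpg", "denah_bus_isi_60.jpg")
```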

          The examples and images of denah bus isi 60

          -

To give you an idea of what denah bus isi 60 looks like, here are some examples and images of different types of denah bus isi 60:

          -


          - - - - -
Big bus seat 2-2 | Big bus seat 2-3
Junior bus seat 2-2 | Medium bus seat 2-2
Microbus (elf)
          -

          How to Use Denah Bus Isi 60?

          -

          Once you have downloaded denah bus isi 60, you can use it for various purposes and functions. Here are some tips on how to use denah bus isi 60:

          -

          The applications and functions of denah bus isi 60

          -

          Some of the applications and functions of denah bus isi 60 are:

          -
            -
• To choose your seat: You can use denah bus isi 60 to select where you want to sit on the bus. You can compare the seats based on their position, view, space, and accessibility. You can also check the availability and price of the seats before booking.
          • -
          • To plan your trip: You can use denah bus isi 60 to plan your trip itinerary and schedule. You can estimate the time and distance of your travel based on the route and speed of the bus. You can also prepare your luggage and belongings based on the space and storage of the bus.
          • -
          • To communicate with others: You can use denah bus isi 60 to communicate with other passengers, drivers, or guides on the bus. You can share your seat number, location, or destination with them. You can also ask for help or assistance if you have any problems or needs on the bus.
          • -
          • To learn more about the bus: You can use denah bus isi 60 to learn more about the history, culture, and features of the bus. You can appreciate the design, style, and decoration of the bus. You can also discover the stories, legends, and facts behind the bus.
          • -
          -

          The instructions and guidelines for using denah bus isi 60

          -

          Some of the instructions and guidelines for using denah bus isi 60 are:

          -
            -
          1. Read your denah bus isi 60 carefully: Before using your denah bus isi 60, make sure that you read it carefully and understand it well. Pay attention to the details, symbols, and labels on the diagram. If you have any questions or doubts, ask for clarification or confirmation.
          2. -
          3. Follow your denah bus isi 60 correctly: When using your denah bus isi 60, make sure that you follow it correctly and accurately. Do not deviate from your chosen seat or route unless necessary or permitted. Do not misuse or abuse your denah bus isi 60 for any illegal or unethical purposes.
          4. -
          5. Respect your denah bus isi 60 politely: When using your denah bus isi 60, make sure that you respect it politely and courteously. Do not damage or deface your denah bus isi 60 in any way. Do not disturb or annoy other passengers or staff with your denah bus isi 60. Do not lose or forget your denah bus isi 60 on the bus.
          6. -
          -

          The best practices and recommendations for using denah bus isi 60

          -

          Some of the best practices and recommendations for using denah bus isi 60 are:

          -
            -
          • Be flexible and adaptable: When using your denah bus isi 60, be prepared for any changes or challenges that may occur on the bus. Be flexible and adaptable to the situation and the environment. Be ready to adjust your seat or route if needed or requested.
          • -
          • Be smart and savvy: When using your denah bus isi 60, be aware of the risks and opportunities that may arise on the bus. Be alert and vigilant to any threats or dangers that may affect your safety or security. Be proactive and resourceful to any opportunities or benefits that may enhance your experience or satisfaction.
          • -
          • Be friendly and sociable: When using your denah bus isi 60, be open and willing to interact with other passengers or staff on the bus. Be polite and respectful to their opinions, preferences, and needs. Be helpful and cooperative to their requests, problems, or situations.
          • -
          -

          How to Stay Safe on a Bus?

          -

          Besides using denah bus isi 60, there are other ways to stay safe on a bus. Here are some tips on how to do that:

          -

          The importance and awareness of bus safety

          -

Bus safety is important because it can prevent accidents, injuries, or fatalities on the bus, and because it protects your health, well-being, and belongings. Therefore, you should be aware of the potential hazards and risks that may affect your bus safety, such as:

          • Bad weather conditions: Rain, snow, fog, wind, or storms can affect the visibility, stability, and speed of the bus.
          • Poor road conditions: Potholes, bumps, cracks, or debris can affect the smoothness, comfort, and balance of the bus.
          • Traffic congestion: Traffic jams, accidents, or roadblocks can affect the time, distance, and route of the bus.
          • Mechanical problems: Breakdowns, malfunctions, or failures can affect the performance, function, and operation of the bus.
          • Human errors: Mistakes, negligence, or recklessness can affect the judgment, skill, and behavior of the driver or other passengers.
          • Criminal activities: Theft, robbery, vandalism, or violence can affect the safety, security, and peace of the passengers or staff.

          The rules and regulations for bus safety


          To ensure bus safety, you should follow the rules and regulations that are set by the bus company, the government, or the law. Some of them are:

          • Wear your seat belt: You should always wear your seat belt when you are on the bus. It can protect you from being thrown out of your seat in case of a sudden stop or a collision.
          • Stay in your seat: You should not move around or change your seat when the bus is moving. It can cause you to lose your balance or fall down. It can also distract or disturb the driver or other passengers.
          • Keep your luggage secure: You should not leave your luggage unattended or loose on the bus. It can cause damage or injury to yourself or others. It can also be stolen or lost.
          • Follow the instructions: You should listen to and follow the instructions given by the driver or the guide on the bus. They can inform you about the schedule, route, stops, or emergencies on the bus.
          • Respect the signs: You should obey and respect the signs posted on the bus. They can indicate the rules, warnings, or information about the bus.

          The tips and advice for bus safety


          To enhance bus safety, you should also follow some tips and advice that are based on common sense, experience, or wisdom. Some of them are:

          • Choose a reputable bus company: You should do some research and comparison before choosing a bus company for your trip. Check the company's reputation, quality, safety record, and service, and read the reviews and feedback from other customers.
          • Check the condition of the bus: You should inspect the condition of the bus before boarding it. You should look for any signs of damage, wear, or dirt on the bus. You should also test the features and facilities of the bus such as the air conditioning, audio system, TV, DVD, reclining seats, seat belts, Wi-Fi, USB ports, etc.
          • Bring your essentials: You should pack your essentials for your trip such as your ID, ticket, money, phone, charger, water, snacks, medicine, etc. You should also bring some entertainment or activities for yourself such as a book, a magazine, a game, a music player, etc.
          • Be alert and aware: You should be alert and aware of your surroundings and situation on the bus. You should watch out for any suspicious or unusual people or activities on the bus. You should also be aware of the emergency exits and procedures on the bus.
          • Be courteous and considerate: You should be courteous and considerate to the driver and other passengers on the bus. You should not make loud noises or cause disturbances on the bus. You should also not litter or damage anything on the bus.

          Conclusion


          In conclusion, denah bus isi 60 is a useful tool for anyone who travels by bus. It can help you to know more about the layout and safety of the bus. It can also help you to choose your seat, plan your trip, communicate with others, learn more about the bus, and stay safe on the bus. To download denah bus isi 60, you need to find a reliable source, follow the steps, and check the examples. To use denah bus isi 60, you need to follow the rules, instructions, and guidelines. You also need to follow some tips and advice to enhance your experience and satisfaction. We hope that this article has helped you to understand and appreciate denah bus isi 60 better.


          FAQs


          Here are some frequently asked questions about denah bus isi 60:


          What is the difference between denah bus isi 60 and denah bis 60?


          Denah bus isi 60 and denah bis 60 both mean "bus layout with 60 seats" in Indonesian and can be used interchangeably. Denah bus isi 60 is the more common, everyday term, while denah bis 60 sounds slightly more formal and is used less often.


          How can I get a custom denah bus isi 60?


          If you want to get a custom denah bus isi 60 that suits your specific needs and preferences, you can contact a professional designer or a graphic artist who can create one for you. You can provide them with your requirements and specifications such as the type, size, model, brand, condition, features, and facilities of the bus. You can also give them some examples or images of denah bus isi 60 that you like or dislike. They can charge you a fee depending on the complexity and quality of the work.


          How can I share my denah bus isi 60 with others?


          If you want to share your denah bus isi 60 with others, you can use various methods and platforms such as email, WhatsApp, Facebook, Instagram, Twitter, etc. You can attach your denah bus isi 60 file or link to your message or post. You can also add some captions or hashtags to describe or promote your denah bus isi 60. You can also ask for feedback or comments from others about your denah bus isi 60.


          How can I update my denah bus isi 60?


          If you want to update your denah bus isi 60, you can use some tools and software such as Adobe Photoshop, Microsoft Word, Paint, etc. You can edit, modify, or enhance your denah bus isi 60 file according to your needs and preferences. You can change the color, size, shape, or style of your denah bus isi 60. You can also add or remove some elements or features of your denah bus isi 60.


          How can I learn more about denah bus isi 60?


          If you want to learn more about denah bus isi 60, you can do some research and reading on the internet or in books. You can search for more sources and websites that offer denah bus isi 60. You can also read more articles, blogs, or reviews about denah bus isi 60. You can also watch some videos, podcasts, or webinars about denah bus isi 60. You can also join some forums, groups, or communities that discuss denah bus isi 60.

          \ No newline at end of file diff --git a/spaces/singhk28/nocodeml/app.py b/spaces/singhk28/nocodeml/app.py deleted file mode 100644 index 7d45ab2c88c49f7db252c27c13d8df04da40caa1..0000000000000000000000000000000000000000 --- a/spaces/singhk28/nocodeml/app.py +++ /dev/null @@ -1,309 +0,0 @@ -# Module Imports -import pandas as pd -import numpy as np -import streamlit as st -from pycaret import regression as reg -from pycaret import classification as clf -from sklearn.metrics import mean_absolute_error, max_error, r2_score, mean_squared_error, confusion_matrix, ConfusionMatrixDisplay, accuracy_score -import matplotlib.pyplot as plt -import streamlit.components.v1 as components -import mpld3 -import time -# ---------------------------------------------------------------------------------------------------------------------- # -# Settings: -st.set_option('deprecation.showPyplotGlobalUse', False) - -# ---------------------------------------------------------------------------------------------------------------------- # -# Collecting User Input -## Preamble & Formatting - -st.markdown(f'

          {"No Code ML"}

          ', unsafe_allow_html=True) -col1, mid, col2 = st.columns([10,1,20]) -with col1: - st.image('https://images.pexels.com/photos/2599244/pexels-photo-2599244.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1') -with col2: - st.markdown("""This tool prepares a machine learning model using your tabular data. The tool can be used in 2 ways:""", unsafe_allow_html=True) - st.markdown("""1) Benchmark different algorithms for your dataset to find the best algorithm and then tune that model to determine best hyperparameters.""", unsafe_allow_html=True) -st.markdown("""2) In the case of experimental science, the best obtained model can be used to make predictions for various combinations of the provided data to try to obtain a combination that achieves a desired target value (if possible).""", unsafe_allow_html=True) -st.markdown("""**The tool is currently under active development. Please direct any bug reports or inquiries to the clean energy lab at UofT.**""", unsafe_allow_html=True) -st.markdown("""---""") - -st.markdown(f"**To use this tool**, fill out all the requested fields from top to bottom.") -st.markdown(f"**Note:** If an error is obtained refresh the page and start over.") - -## Column Name -st.markdown(f'

          {"1) Provide name of the column you want to predict with model."}

          ', unsafe_allow_html=True) -target_col = st.text_input("Enter the exact name of the column with your target variable. This field is case sensitive. (i.e., capital letters must match.)") - -## Task Type: Regression or Classification -st.markdown(f'

          {"2) Select type of model you would like to build"}

          ', unsafe_allow_html=True) -mod_type = st.selectbox("What type of model would you like to train? Pick regression model for continous values or classifier for categorical values.", ('regression', 'classifier')) - -## Mode of Use -st.markdown(f'

          {"3) Select mode of use"}

          ', unsafe_allow_html=True) -mode_type = st.selectbox("What would you like to use the tool for?", ('Benchmarking (finding the best algorithm for your problem)', 'Parameter Search (find combination of parameters to get a desired value)')) -if mode_type == 'Parameter Search (find combination of parameters to get a desired value)': - ## Desired Target Value - if mod_type == 'classifier': - st.write('Parameter search is not currently supported with classifier type models.') - st.write('Please refresh page and try again with the supported tasks.') - exit() - - if mod_type == 'regression': - st.markdown(f'

          {"4) Type of parameter search"}

          ', unsafe_allow_html=True) - opt_type = st.selectbox("What do you want to do with the output?", ('Maximize it', 'Minimize it', 'Obtain a desired value')) - if opt_type == 'Obtain a desired value': - desired_value = float(st.number_input("Enter the desired value for the target variable.")) - - ## Ask for Dataset - st.markdown(f'

          {"5) Upload CSV file "}

          ', unsafe_allow_html=True) - uploaded_file = st.file_uploader("Upload a CSV file", type="csv") - -else: - ## Ask for Dataset - st.markdown(f'

          {"4) Upload CSV file "}

          ', unsafe_allow_html=True) - uploaded_file = st.file_uploader("Upload a CSV file", type="csv") - -# ---------------------------------------------------------------------------------------------------------------------- # -if uploaded_file: - # Read CSV File and Provide Preview of Data and Statistical Summary: - data = pd.read_csv(uploaded_file) - data_size = len(data) - - if target_col not in list(data.columns): - st.error("ERROR: Provided name of the target column is not in the CSV file. Please make sure you provide the exact match (case sensitive).Please provide the correct label and try again.") - exit() - - st.subheader("Data preview:") - st.write(data.head()) - st.subheader("Statistical Summary of the Provided Data:") - st.write(data.describe()) - - # Prepare Train/Test Split: - fraction_check = 10_000/data_size # Cap Training Dataset to 10k to allow tuning to occur in a timely manner - if fraction_check < 0.8: - train_frac = fraction_check - train_data = data.sample(frac=train_frac, random_state=0) - test_data = data.drop(train_data.index) - if len(test_data) > 5_000: - test_data = test_data[0:5000] - else: - train_frac = 0.8 - train_data = data.sample(frac=train_frac, random_state=0) - test_data = data.drop(train_data.index) - - # Figure out Column Data Types - object_columns = data.select_dtypes(include="object").columns.tolist() - - # Create a list of Tree Models: - tree_mods_list = ['Extra Trees Regressor', 'Extra Trees Classifier', 'Random Forest Regressor', 'Random Forest Classifier', 'Decision Tree Regressor', 'Decision Tree Classifier', 'CatBoost Regressor', 'Light Gradient Boosting Machine'] - -# ---------------------------------------------------------------------------------------------------------------------- # - # Build Regression Model - if mod_type == "regression": - # Setup Regressor Problem - if object_columns: - if data_size > 20: - s = reg.setup(train_data, target = target_col, normalize=True, categorical_features=object_columns, fold=5, silent= True) - else: - s = reg.setup(data, target = target_col, normalize=True, categorical_features=object_columns, silent= True) - else: - if data_size > 20: - s = reg.setup(train_data, target = target_col, normalize=True, silent= True, fold=5) - else: - s = reg.setup(data, target = target_col, normalize=True, silent= True) - - # Find the best algorithm to build Model: - st.subheader("Algorithm Selection") - start_algo = time.time() - with st.spinner(text="Finding the best algorithm for your dataset..."): - best_mod = reg.compare_models() - regression_results = reg.pull() - best_mod_name = regression_results.Model[0] - st.write(regression_results) - end_algo = time.time() - st.write('Time taken to select algorithm:', end_algo - start_algo, 'seconds') - - # Tune the hyperparameters for the best algorithm: - st.subheader("Tuning the Model") - start_tune = time.time() - with st.spinner(text="Tuning the algorithm..."): - tuned_mod = reg.tune_model(best_mod, optimize = 'RMSE', n_iter=5) - end_tune = time.time() - st.write('Time taken to select hyperparameters:', end_tune - start_tune, 'seconds') - - # Finalize the model (Train on the entire train dataset): - with st.spinner("Finalizing the model..."): - final_mod = reg.finalize_model(tuned_mod) - - st.success('Model successfully trained! 
Here are your results:') - st.write('Best algorithm: ', best_mod_name) - st.write('Best hyperparameters: ', final_mod.get_params()) - - # Print a SHAP Analysis Summary Plot: - if best_mod_name in tree_mods_list: - st.subheader("SHAP Analysis Summary Plot") - st.pyplot(reg.interpret_model(final_mod)) - - if len(data) > 20: - # Predict on the test set if it was created: - st.subheader("Evaluating model on the test/hold out data:") - predictions = reg.predict_model(final_mod, data=test_data) - st.success('Here are your results:') - st.write(predictions) - st.caption('"Label" is the value predicted by the model.') - - # Accuracy of predictions: - MAE_val = mean_absolute_error(predictions[target_col], predictions['Label']) - RMSE_err = mean_squared_error(predictions[target_col], predictions['Label'], squared=False) - Max_err = max_error(predictions[target_col], predictions['Label']) - r2_val = r2_score(predictions[target_col], predictions['Label']) - err_dict = {'Mean Absolute Error': MAE_val, 'Root Mean Squared Error': RMSE_err, 'Maximum Error': Max_err} - df_err = pd.DataFrame(err_dict, index=[0]) - st.write(df_err) - - # Create an true vs. predicted plot - fig = plt.figure(figsize=(8,8)) - plt.grid(b=None) - plt.scatter(x=predictions[target_col], y=predictions['Label']) - plt.xlabel("True Value", fontsize=18) - plt.ylabel("Predicted Value", fontsize=18) - fig_html = mpld3.fig_to_html(fig) - components.html(fig_html, height=1000) - -# ---------------------------------------------------------------------------------------------------------------------- # - # Use Trained Model to Explore Parameter Space - if mode_type == 'Parameter Search (find combination of parameters to get a desired value)': - if object_columns: - st.write("Optimization with string data types not currently supported.") - else: - with st.spinner("Generating parameter combinations for search"): - # Creating Variables for Data Generation Used in the Optimization Segment - list_of_cols = list(data.columns[0:-1]) - - # Find min and max value for the input features in the training dataset - max_list = [data[i].max() for i in list_of_cols] - min_list = [data[i].min() for i in list_of_cols] - - # Generate DF from New Parameters - generated_data = np.array([np.random.randint(low=min_list[i], high=max_list[i], size=50_000) for i in range(0,len(max_list))]).T - generated_data_df = pd.DataFrame(generated_data, columns = list_of_cols) - - # Make Predictions with Trained Model - generated_predictions = reg.predict_model(final_mod, data = generated_data_df) - - if opt_type == 'Obtain a desired value': - st.subheader("Using the trained model to obtain the desired target value:") - - # Filter results to get the places closed to desired value - ## Determine +/- window for search - data_spread = data[target_col].std()/3 - dv_min = desired_value - data_spread - dv_max = desired_value + data_spread - - ## Apply +/- window to determine lower and upper bound to filter DF ('Generated_predictions) - lower_bound = generated_predictions["Label"] >=dv_min - upper_bound = generated_predictions["Label"] <=dv_max - - ## Filter DF using upper and lower bounds - sort values based on absolute distance to desired value provided by user. 
- proposed_values_to_try = generated_predictions[lower_bound & upper_bound] - proposed_values_to_try['distance_to_desired_value'] = np.abs(generated_predictions['Label'] - desired_value) - proposed_values_to_try.sort_values('distance_to_desired_value', inplace=True) - proposed_values_to_try.reset_index(drop=True, inplace=True) - - ## Display top 10 rows - final_proposed_parameters = proposed_values_to_try[0:10] - - if opt_type == 'Maximize it': - st.subheader("Using the trained model to maximize target value:") - generated_preds = generated_predictions.copy() - - # Sort results in descending order based on predicted values - generated_preds.sort_values('Label', ascending=False, inplace=True) - generated_preds.reset_index(drop=True, inplace=True) - - ## Display top 10 rows - final_proposed_parameters = generated_preds[0:10] - - if opt_type == 'Minimize it': - st.subheader("Using the trained model to minimize target value:") - generated_preds = generated_predictions.copy() - - # Sort results in descending order based on predicted values - generated_preds.sort_values('Label', inplace=True) - generated_preds.reset_index(drop=True, inplace=True) - - ## Display top 10 rows - final_proposed_parameters = generated_preds[0:10] - - if len(final_proposed_parameters) == 0: - st.write("No parameters could be found for the desired value based on current model. Try collecting additional data or provide a different target value.") - else: - st.write(final_proposed_parameters) - st.download_button(label="Download the Proposed Parameters to Try", data = final_proposed_parameters.to_csv(index=False), file_name='Final_proposed_parameters.csv') - - -# ---------------------------------------------------------------------------------------------------------------------- # - # Build Classifier Model - if mod_type == "classifier": - # Setup Classifier Problem - if data_size > 20: - s = clf.setup(train_data, target = target_col, normalize=True, silent= True, fold=5) - else: - s = clf.setup(data, target = target_col, normalize=True, silent= True) - - # Find the best algorithm to build Model: - st.subheader("Algorithm Selection") - start_algo = time.time() - with st.spinner(text="Finding the best algorithm for your dataset..."): - best_mod = clf.compare_models() - classifier_results = clf.pull() - best_mod_name = classifier_results.Model[0] - st.write(classifier_results) - end_algo = time.time() - st.write('Time taken to select algorithm:', end_algo - start_algo, 'seconds') - - # Tune the hyperparameters for the best algorithm: - st.subheader("Tuning the Model") - start_tune = time.time() - with st.spinner(text="Tuning the algorithm..."): - tuned_mod = clf.tune_model(best_mod, optimize = 'AUC', n_iter=5) - end_tune = time.time() - st.write('Time taken to select hyperparameters:', end_tune - start_tune, 'seconds') - - # Finalize the model (Train on the entire train dataset): - with st.spinner("Finalizing the model..."): - final_mod = clf.finalize_model(tuned_mod) - - st.success('Model successfully trained! 
Here are your results:') - st.write('Best algorithm: ', best_mod_name) - st.write('Best hyperparameters: ', final_mod.get_params()) - - # Print a Feature Importance Plot: - if best_mod_name in tree_mods_list: - st.subheader("Feature Importance Plot") - st.pyplot(clf.plot_model(final_mod, plot='feature')) - - if len(data) > 20: - # Predict on the test set if it was created: - st.subheader("Evaluating model on the test/hold out data:") - predictions = clf.predict_model(final_mod, data=test_data) - st.success('Here are your results:') - st.write(predictions) - st.caption('"Label" is the value predicted by the model.') - st.write('---') - - # Provide Accuracy: - mod_accuracy = accuracy_score(predictions[target_col], predictions['Label']) - st.write('**Model accuracy on test set :**', f'{(mod_accuracy):.2f}') - - # Create a confusion matrix: - st.subheader("Confusion Matrix for test set:") - cm = confusion_matrix(predictions[target_col], predictions['Label']) - disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=predictions[target_col].unique()) - disp.plot() - plt.grid(b=None) - st.pyplot() - -# Visitor Badge -st.markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=singhk28_nocodeml)") \ No newline at end of file diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/networks/sync_batchnorm/batchnorm.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/networks/sync_batchnorm/batchnorm.py deleted file mode 100644 index 3e516aa6464211632d89862a6c57c520875c006c..0000000000000000000000000000000000000000 --- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/networks/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,361 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import collections - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -from .comm import SyncMaster -from .replicate import DataParallelWithCallback - -__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', - 'SynchronizedBatchNorm3d', 'convert_model'] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dementions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True): - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. 
- if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. - if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. - # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - return mean, bias_var.clamp(self.eps) ** -0.5 - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. 
- - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm1d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. 
- - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm2d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm3d, self)._check_input_dim(input) - - -def convert_model(module): - """Traverse the input module and its child recursively - and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d - to SynchronizedBatchNorm*N*d - - Args: - module: the input module needs to be convert to SyncBN model - - Examples: - >>> import torch.nn as nn - >>> import torchvision - >>> # m is a standard pytorch model - >>> m = torchvision.models.resnet18(True) - >>> m = nn.DataParallel(m) - >>> # after convert, m is using SyncBN - >>> m = convert_model(m) - """ - if isinstance(module, torch.nn.DataParallel): - mod = module.module - mod = convert_model(mod) - mod = DataParallelWithCallback(mod) - return mod - - mod = module - for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d, - torch.nn.modules.batchnorm.BatchNorm2d, - torch.nn.modules.batchnorm.BatchNorm3d], - [SynchronizedBatchNorm1d, - SynchronizedBatchNorm2d, - SynchronizedBatchNorm3d]): - if isinstance(module, pth_module): - mod = sync_module(module.num_features, module.eps, module.momentum, module.affine) - mod.running_mean = module.running_mean - mod.running_var = module.running_var - if module.affine: - mod.weight.data = module.weight.data.clone().detach() - mod.bias.data = module.bias.data.clone().detach() - - for name, child in module.named_children(): - mod.add_module(name, convert_model(child)) - - return mod diff --git a/spaces/society-ethics/model-card-regulatory-check/tests/cards/jonatasgrosman___wav2vec2-large-xlsr-53-english.md b/spaces/society-ethics/model-card-regulatory-check/tests/cards/jonatasgrosman___wav2vec2-large-xlsr-53-english.md deleted file mode 100644 index fc8b714a2464f9cc546be0aad502f3075a8937ad..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/model-card-regulatory-check/tests/cards/jonatasgrosman___wav2vec2-large-xlsr-53-english.md +++ /dev/null @@ -1,102 +0,0 @@ -# Fine-tuned XLSR-53 large model for speech recognition in English - -Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). -When using this model, make sure that your speech input is sampled at 16kHz. - -This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) - -The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint - -## Usage - -The model can be used directly (without a language model) as follows... 
- -Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: - -```python -from huggingsound import SpeechRecognitionModel - -model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english") -audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] - -transcriptions = model.transcribe(audio_paths) -``` - -Writing your own inference script: - -```python -import torch -import librosa -from datasets import load_dataset -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor - -LANG_ID = "en" -MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english" -SAMPLES = 10 - -test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") - -processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) -model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) - -# Preprocessing the datasets. -# We need to read the audio files as arrays -def speech_file_to_array_fn(batch): - speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) - batch["speech"] = speech_array - batch["sentence"] = batch["sentence"].upper() - return batch - -test_dataset = test_dataset.map(speech_file_to_array_fn) -inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) - -with torch.no_grad(): - logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits - -predicted_ids = torch.argmax(logits, dim=-1) -predicted_sentences = processor.batch_decode(predicted_ids) - -for i, predicted_sentence in enumerate(predicted_sentences): - print("-" * 100) - print("Reference:", test_dataset[i]["sentence"]) - print("Prediction:", predicted_sentence) -``` - -| Reference | Prediction | -| ------------- | ------------- | -| "SHE'LL BE ALL RIGHT." | SHE'LL BE ALL RIGHT | -| SIX | SIX | -| "ALL'S WELL THAT ENDS WELL." | ALL AS WELL THAT ENDS WELL | -| DO YOU MEAN IT? | DO YOU MEAN IT | -| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESSION | -| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSLILLAR GOING TO HANDLE ANDBEWOOTH HIS LIKE Q AND Q | -| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTIAN WASTIN PAN ONTE BATTLY | -| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING | -| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUICE IS SAUCE FOR THE GONDER | -| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD | - -## Evaluation - -1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` - -```bash -python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset mozilla-foundation/common_voice_6_0 --config en --split test -``` - -2. 
To evaluate on `speech-recognition-community-v2/dev_data` - -```bash -python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0 -``` - -## Citation -If you want to cite this model you can use this: - -```bibtex -@misc{grosman2021xlsr53-large-english, - title={Fine-tuned {XLSR}-53 large model for speech recognition in {E}nglish}, - author={Grosman, Jonatas}, - howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}}, - year={2021} -} -``` \ No newline at end of file diff --git a/spaces/spignelon/plant_leaf_classifier/README.md b/spaces/spignelon/plant_leaf_classifier/README.md deleted file mode 100644 index bd01cc68b8548a90e00ef5f3218cff1fea130c39..0000000000000000000000000000000000000000 --- a/spaces/spignelon/plant_leaf_classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Minima -emoji: 🏆 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/hf_byte_bpe.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/hf_byte_bpe.py deleted file mode 100644 index c508578d41bf6b7ce0a847e0797d71b19beb393d..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/hf_byte_bpe.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field - -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass -from fairseq import file_utils - - -@dataclass -class HuggingFaceByteLevelBPEConfig(FairseqDataclass): - bpe_merges: str = field(default="???", metadata={"help": "path to merges.txt"}) - bpe_vocab: str = field(default="???", metadata={"help": "path to vocab.json"}) - bpe_add_prefix_space: bool = field( - default=False, metadata={"help": "add prefix space before encoding"} - ) - - -@register_bpe("hf_byte_bpe", dataclass=HuggingFaceByteLevelBPEConfig) -class HuggingFaceByteLevelBPE(object): - def __init__(self, cfg): - try: - from tokenizers import ByteLevelBPETokenizer - except ImportError: - raise ImportError( - "Please install huggingface/tokenizers with: " "pip install tokenizers" - ) - - bpe_vocab = file_utils.cached_path(cfg.bpe_vocab) - bpe_merges = file_utils.cached_path(cfg.bpe_merges) - - self.bpe = ByteLevelBPETokenizer( - bpe_vocab, - bpe_merges, - add_prefix_space=cfg.bpe_add_prefix_space, - ) - - def encode(self, x: str) -> str: - return " ".join(map(str, self.bpe.encode(x).ids)) - - def decode(self, x: str) -> str: - return self.bpe.decode( - [int(tok) if tok not in {"", ""} else tok for tok in x.split()] - ) - - def is_beginning_of_word(self, x: str) -> bool: - return self.decode(x).startswith(" ") diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/roberta/enc_dec.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/roberta/enc_dec.py deleted file mode 100644 index e538dee0aa5984b1a3d02ce81117d2046c030593..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/roberta/enc_dec.py +++ /dev/null @@ -1,192 +0,0 @@ -import argparse -import logging - -import torch.nn as nn -import fairseq.checkpoint_utils -from fairseq.models import ( - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import TransformerDecoder -from fairseq.models.roberta import model as roberta - -logger = logging.getLogger(__name__) - - -@register_model("roberta_enc_dec") -class RobertaEncDecModel(FairseqEncoderDecoderModel): - @staticmethod - def add_args(parser): - parser.add_argument( - "--pretrained-mlm-checkpoint", - default=None, - type=str, - metavar="PRETRAINED", - help="path to pretrained mlm checkpoint", - ) - parser.add_argument( - "--pretrained-decoder", action="store_true", help="reload decoder" - ) - parser.add_argument( - "--hack-layernorm-embedding", - action="store_true", - help="hack to reload old models trained with encoder-normalize-before=False (no equivalent to encoder-normalize-before=False and layernorm_embedding=False", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present - base_enc_dec_architecture(args) - if args.pretrained_mlm_checkpoint: - arg_overrides = None - if args.hack_layernorm_embedding: - arg_overrides = {"layernorm_embedding": False} - loaded = fairseq.checkpoint_utils.load_model_ensemble_and_task( - [args.pretrained_mlm_checkpoint], 
arg_overrides=arg_overrides - ) - ([roberta_enc], _cfg, _task) = loaded - else: - # Do we need to edit untie_weights here ? - share_in_out = ( - args.share_decoder_input_output_embed or args.share_all_embeddings - ) - args.untie_weights_roberta = not share_in_out - if args.hack_layernorm_embedding: - args.layernorm_embedding = False - args.encoder_normalize_before = False - roberta_enc = roberta.RobertaModel.build_model(args, task) - - return cls.from_roberta(roberta_enc, args, task.source_dictionary) - - @staticmethod - def from_roberta(roberta_enc: roberta.RobertaModel, args, dictionary): - encoder = roberta_enc.encoder.sentence_encoder - vocab_size, embed_dim = encoder.embed_tokens.weight.shape - - if args.share_all_embeddings: - lm_head = roberta_enc.encoder.lm_head - assert encoder.embed_tokens.weight is lm_head.weight, ( - "Can't use --share-all-embeddings with a model " - "that was pretraiend with --untie-weights-roberta_enc" - ) - else: - lm_head = roberta.RobertaLMHead( - embed_dim, vocab_size, roberta_enc.args.activation_fn - ) - - dec_embs = nn.Embedding(vocab_size, embed_dim, dictionary.pad()) - if args.share_all_embeddings or args.share_decoder_input_output_embed: - # Note: I wasn't able to use Embedding _weight parameter to achive this sharing. - dec_embs.weight = lm_head.weight - - decoder = TransformerDecoder( - RobertaEncDecModel.read_args_from_roberta(roberta_enc.args), - dictionary, - dec_embs, - no_encoder_attn=False, - output_projection=lm_head, - ) - if getattr(args, "pretrained_decoder", False): - decoder_dict = encoder.state_dict() - - # TODO: hide setting "encoder_attn" layers behind a flag. - for k, w in list(decoder_dict.items()): - if ".self_attn" in k: - k_enc_attn = k.replace(".self_attn", ".encoder_attn") - decoder_dict[k_enc_attn] = w.detach().clone() - - for k, w in lm_head.state_dict().items(): - decoder_dict["output_projection." + k] = w - - missing_keys, unexpected_keys = decoder.load_state_dict( - decoder_dict, strict=False - ) - # missing_keys = [m for m in missing_keys if ".encoder_attn" not in m] - assert not missing_keys and not unexpected_keys, ( - "Failed to load state dict. " - f"Missing keys: {missing_keys}. " - f"Unexpected keys: {unexpected_keys}." - ) - - if args.share_all_embeddings: - assert decoder.output_projection.weight is decoder.embed_tokens.weight - assert encoder.embed_tokens.weight is decoder.embed_tokens.weight - elif args.share_decoder_input_output_embed: - assert decoder.output_projection.weight is decoder.embed_tokens.weight - assert encoder.embed_tokens.weight is not decoder.embed_tokens.weight - else: - assert decoder.output_projection.weight is not decoder.embed_tokens.weight - assert encoder.embed_tokens.weight is not decoder.embed_tokens.weight - - return RobertaEncDecModel(encoder, decoder) - - @staticmethod - def read_args_from_roberta(roberta_args: argparse.Namespace): - # TODO: this would become easier if encoder/decoder where using a similar - # TransformerConfig object - args = argparse.Namespace(**vars(roberta_args)) - attr_map = [ - ("encoder_attention_heads", "decoder_attention_heads"), - ("encoder_embed_dim", "decoder_embed_dim"), - ("encoder_embed_dim", "decoder_output_dim"), - ("encoder_normalize_before", "decoder_normalize_before"), - ("encoder_layers_to_keep", "decoder_layers_to_keep"), - ("encoder_ffn_embed_dim", "decoder_ffn_embed_dim"), - ("encoder_layerdrop", "decoder_layerdrop"), - ("encoder_layers", "decoder_layers"), - ("encoder_learned_pos", "decoder_learned_pos"), - # should this be set from here ? 
- ("max_positions", "max_target_positions"), - ] - for k1, k2 in attr_map: - setattr(args, k2, getattr(roberta_args, k1)) - - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = not roberta_args.untie_weights_roberta - return args - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - super().upgrade_state_dict_named(state_dict, name) - old_keys = list(state_dict.keys()) - - # rename decoder -> encoder before upgrading children modules - for k in old_keys: - if k.startswith(prefix + "encoder.lm_head"): - state_dict.pop(k) - continue - new_k = k - new_k = new_k.replace(".sentence_encoder.", ".") - new_k = new_k.replace("decoder.lm_head.", "decoder.output_projection.") - if k == new_k: - continue - # print(k, "->", new_k) - state_dict[new_k] = state_dict.pop(k) - - -@register_model_architecture("roberta_enc_dec", "roberta_enc_dec") -def base_enc_dec_architecture(args): - args.hack_layernorm_embedding = getattr(args, "hack_layernorm_embedding", False) - args.pretrained_mlm_checkpoint = getattr(args, "pretrained_mlm_checkpoint", None) - args.pretrained_decoder = getattr(args, "pretrained_decoder", None) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - - roberta.base_architecture(args) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Arcgis Engine Developer Kit V.10.1 Download.md b/spaces/stomexserde/gpt4-ui/Examples/Arcgis Engine Developer Kit V.10.1 Download.md deleted file mode 100644 index 47dcf2138d30e9f6a78a06b07942da4fc2582009..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Arcgis Engine Developer Kit V.10.1 Download.md +++ /dev/null @@ -1,49 +0,0 @@ -

          How to Download and Install ArcGIS Engine Developer Kit v.10.1


          ArcGIS Engine Developer Kit v.10.1 is a software package that allows you to create custom GIS and mapping applications using ArcObjects libraries for .NET, Java, or C++. In this article, we will show you how to download and install ArcGIS Engine Developer Kit v.10.1 and its components on Windows.


          Prerequisites


          Before you start, make sure you have the following:

          • A valid Esri account and software authorization numbers for ArcGIS Engine Developer Kit v.10.1 and ArcGIS Engine v.10.1.
          • The Microsoft .NET Framework 3.5 SP1 or later installed on your machine.
          • Administrative privileges for installation.
          • Write access to your download directory location and your ArcGIS installation directory location.

          Download


          To download ArcGIS Engine Developer Kit v.10.1 and its components, follow these steps:

          1. Log in to the Esri Customer Care Portal with your Esri account credentials.
          2. Go to the Downloads section and select ArcGIS Engine Developer Kit v.10.1 from the list of products.
          3. Click on the Download button next to the component you want to download. You can choose from ArcGIS Engine, ArcGIS Engine Background Geoprocessing (64 bit), Database Server (Desktop), ArcGIS License Manager 2019.0, ArcGIS Coordinate Systems Data, DBMS Support Files, and ArcObjects SDKs for .NET, Java, or C++.
          4. Use the Esri Download Manager to manage your downloads. You can select and download multiple components, but only install one component at a time when the downloads are complete.

          Extract and Install


          To extract and install ArcGIS Engine Developer Kit v.10.1 and its components, follow these steps:

          1. Run the downloaded self-extracting .exe file for each component.
          2. Choose a unique location to extract the installation files.
          3. Click OK to start the extraction process.
          4. When the extraction is complete, leave the Launch the setup program check box checked to automatically install the component, or uncheck it to install it later.
          5. Follow the prompts of the setup program to complete the installation for each component.

          Authorize


          To authorize ArcGIS Engine Developer Kit v.10.1 and ArcGIS Engine v.10.1, follow these steps:

          1. On completion of the ArcGIS Engine Developer Kit v.10.1 installation, click Authorize to launch the Software Authorization Wizard.
          2. Select I have installed my software and need to authorize it and click Next.
          3. Select Authorize with Esri now using the Internet and click Next.
          4. Enter your software authorization numbers for ArcGIS Engine Developer Kit v.10.1 and ArcGIS Engine v.10.1 and click Next.
          5. Select Single Use or Concurrent Use depending on your license type and click Next.
          6. If you selected Concurrent Use, enter the name of your license manager server and click Next.
          7. Select the extensions you want to authorize and click Next.
          8. Review your authorization information and click Finish.

          Congratulations! You have successfully downloaded and installed ArcGIS Engine Developer Kit v.10.1 and its components on your machine. You can now start developing custom GIS and mapping applications using ArcObjects libraries for .NET, Java, or C++.
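
          Once your licenses are authorized, the first thing most custom ArcGIS Engine applications do at run time is bind to the Engine runtime and check out a license before making any ArcObjects calls. The snippet below is a minimal sketch of that start-up pattern using the ArcObjects SDK for Java; the class and constant names are taken from the com.esri.arcgis.system package, but you should verify them and the product code you pass against the javadoc installed with your Developer Kit rather than treating this as an official Esri sample.

```java
import com.esri.arcgis.system.AoInitialize;
import com.esri.arcgis.system.EngineInitializer;
import com.esri.arcgis.system.esriLicenseProductCode;
import com.esri.arcgis.system.esriLicenseStatus;

public class EngineLicenseCheck {
    public static void main(String[] args) throws Exception {
        // Bind to the ArcGIS Engine runtime before any other ArcObjects call.
        EngineInitializer.initializeEngine();

        // Check out the ArcGIS Engine license that was authorized with the wizard.
        AoInitialize aoInit = new AoInitialize();
        int status = aoInit.initialize(esriLicenseProductCode.esriLicenseProductCodeEngine);
        if (status != esriLicenseStatus.esriLicenseCheckedOut) {
            System.err.println("ArcGIS Engine license could not be checked out, status code: " + status);
            System.exit(1);
        }

        // ... build maps, work with geodatabases, or run geoprocessing here ...

        // Release the license when the application shuts down.
        aoInit.shutdown();
    }
}
```

          The flow is the same idea in .NET or C++: bind to the runtime, check out an Engine license, do your GIS work, and release the license on shutdown; only the namespaces and types differ.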


          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Beersmith 2 Version 2.1.02 Activation Key.md b/spaces/stomexserde/gpt4-ui/Examples/Beersmith 2 Version 2.1.02 Activation Key.md deleted file mode 100644 index 987e97e4d388bc2787c329acb92fa9ded093622d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Beersmith 2 Version 2.1.02 Activation Key.md +++ /dev/null @@ -1,98 +0,0 @@ -

          How to Activate Beersmith 2 Version 2.1.02 with a Valid Key


          If you are a homebrewer who wants to create your own recipes and brew logs, you might have heard of Beersmith 2, software that helps you design and manage your brewing process. But what exactly is Beersmith 2, and how can you activate it with a valid key? In this article, we will answer these questions and show you how to install, use, and troubleshoot Beersmith 2 on your computer.


          What is Beersmith 2 and what are its features and benefits


          Beersmith 2 is a powerful and easy-to-use tool that can help you brew better beer at home. It allows you to:

          • Choose from thousands of beer recipes from various styles and sources, or create your own from scratch.
          • Adjust the ingredients, equipment, and methods according to your preferences and needs.
          • Calculate the color, bitterness, gravity, alcohol, and other parameters of your beer automatically (a simple example of this kind of calculation is sketched after this list).
          • Follow the step-by-step instructions for each stage of the brewing process, from mashing to bottling.
          • Record your measurements, notes, and results for each batch of beer.
          • Organize your inventory, shopping list, and schedule for your brewing activities.
          • Share your recipes and feedback with other Beersmith users online.

          By using Beersmith 2, you can save time, money, and effort in making your own beer. You can also experiment with different ingredients, techniques, and styles, and learn more about the science and art of brewing.

          -

          Why do you need an activation key and how to get one

          -

          To use Beersmith 2 on your computer, you need an activation key that verifies your purchase of the software. The activation key is a unique code that you enter when you install or run Beersmith 2 for the first time. The activation key allows you to use Beersmith 2 on up to two devices at a time.

          -

          To get an activation key, you need to buy Beersmith 2 from the official website or other authorized sellers. The price of Beersmith 2 is $27.95 for a single license, which includes free updates for life. You can also get a discounted price if you buy multiple licenses or bundle packages.

          -

          After you buy Beersmith 2, you will receive an email with your activation key and a download link for the software. You can also find your activation key in your online account on the Beersmith website. You should keep your activation key in a safe place, as you will need it if you want to reinstall or transfer Beersmith 2 to another device.

          -

          How to install Beersmith 2 on your computer

          -

          Installing Beersmith 2 on your computer is a simple and quick process. Here are the steps you need to follow:

          -
            -
          1. Download the software using the link in the email you received after buying Beersmith 2, or from your account on the Beersmith website. You can choose the version that matches your operating system, whether it is Windows, Mac, or Linux.
          2. -
          3. Run the setup file and follow the instructions on the screen. You can customize the installation options, such as the destination folder and the shortcuts.
          4. -
          5. Enter your activation key when prompted. You can copy and paste it from your email or your online account. Make sure you enter it correctly and without any spaces.
          6. -
          7. Finish the installation and launch Beersmith 2. You can start using the software and explore its features and functions.
          8. -
          -

          If you have any problems with the installation or the activation, you can contact the Beersmith support team or check the online forums for help.

          -

          How to use Beersmith 2 to brew your own beer at home

          -

          Once you have installed and activated Beersmith 2 on your computer, you can start using it to brew your own beer at home. Here are some of the basic steps you need to follow:

          -

          Choose a recipe from the database or create your own

          -

          Beersmith 2 comes with a large database of beer recipes from various styles and sources. You can browse through them by category, name, rating, or author. You can also search for a specific recipe by keyword or filter. You can view the details of each recipe, such as the ingredients, methods, and parameters. You can also see the comments and ratings from other users who have tried the recipe.

          -

          -

          If you want to create your own recipe, you can use the recipe editor in Beersmith 2. You can start from scratch or use an existing recipe as a template. You can add, edit, or delete any ingredient or method in your recipe. You can also name and describe your recipe, and add any notes or pictures.

          -

          Adjust the parameters according to your equipment and preferences

          -

          Beersmith 2 allows you to adjust the parameters of your recipe according to your equipment and preferences. You can change the batch size, boil time, efficiency, water profile, carbonation level, and other factors that affect your beer. You can also use the built-in calculators and tools in Beersmith 2 to help you with these adjustments. For example, you can use the mash calculator to determine the water temperature and volume for your mash, or the yeast starter calculator to estimate how much yeast you need for your fermentation.

          -

          Beersmith 2 also lets you customize your equipment profile in the software. You can enter the details of your brewing equipment, such as the kettle size, mash tun type, fermenter volume, and loss factors. This way, Beersmith 2 can scale your recipes to fit your equipment and give you more accurate results.

          -

          Follow the steps of the brewing process and record your data

          -

          Beersmith 2 provides you with a step-by-step guide for each stage of the brewing process, from mashing to bottling. You can follow these instructions on your computer screen or print them out for convenience. You can also use the timers and alarms in Beersmith 2 to remind you of the actions you need to take during the process.

          -

          As you brew your beer, you can record your data in Beersmith 2. You can enter your measurements, such as gravity, temperature, pH, volume, color, and bitterness. You can also enter any notes or observations that you have during the process. Beersmith 2 will save these data in your brew log for future reference.

          -

          How to troubleshoot common problems with Beersmith 2

          -

          Beersmith 2 is reliable, stable software that works well on most computers. However, you might occasionally run into problems with it. Here are some of the common problems that users face with Beersmith 2 and how to solve them:

          -

          What to do if you lose your activation key or need to transfer it to another device

          -

          If you lose your activation key or need to transfer it to another device, you can retrieve it from your online account on the Beersmith website. You can log in with your email and password that you used when you bought the software. You can also request a password reset if you forgot your password. Once you log in, you can see your activation key and your order history. You can copy and paste your activation key to activate Beersmith 2 on another device. You can also deactivate your activation key from a device if you want to free up a slot for another device.

          -

          How to update the software and get technical support

          -

          Beersmith 2 offers free updates for life for its users. You can check for updates from the software itself or from the website. You can download and install the latest version of Beersmith 2 without losing your data or settings. You should always update your software to get the latest features, bug fixes, and improvements.

          -

          If you have any questions or issues with Beersmith 2, you can get technical support from the Beersmith team or the community. You can contact the Beersmith support team by email or phone, or submit a ticket on the website. You can also check the online forums, blogs, podcasts, and videos for helpful tips and tutorials. You can also join the Beersmith Facebook group and Twitter account to interact with other users and get news and updates.

          -

          How to avoid errors and bugs in the software

          -

          Beersmith 2 is well-tested, reliable software that rarely has errors or bugs. However, you might occasionally run into glitches or problems with it. Here are some ways to avoid or fix them:

          -
            -
          • Make sure your computer meets the minimum system requirements for Beersmith 2. You need at least Windows XP, Mac OS X 10.5, or Linux Ubuntu 10.04, as well as 1 GB of RAM and 100 MB of disk space.
          • -
          • Make sure your internet connection is stable and secure when you download, install, update, or activate Beersmith 2. You might experience some delays or failures if your connection is slow or interrupted.
          • -
          • Make sure you enter your activation key correctly and without any spaces when you install or run Beersmith 2. You might get an invalid key error if you enter it wrong.
          • -
          • Make sure you back up your data regularly in case of data loss or corruption. You can use the backup and restore functions in Beersmith 2 to save and load your data files; a minimal external-backup sketch follows this list.
          • -
          • Make sure you report any errors or bugs that you find in Beersmith 2 to the Beersmith support team or the online forums. This way, they can fix them in future updates and improve the software quality.
          • -
          -
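          If you want an extra copy of your data outside the program itself, a small script can do the job. This is a minimal sketch rather than a Beersmith feature: the data-folder path below is an assumption, so point it at wherever your installation actually keeps its files.

```python
# Minimal sketch: copy the BeerSmith 2 data folder to a timestamped backup
# directory. The source path is an assumption -- check where your own
# installation stores its files before relying on this.
import shutil
from datetime import datetime
from pathlib import Path

def backup_beersmith_data(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = backup_root / f"beersmith-backup-{stamp}"
    shutil.copytree(source, destination)  # raises if the source folder is missing
    return destination

if __name__ == "__main__":
    # Hypothetical locations -- adjust to your own machine.
    src = Path.home() / "Documents" / "BeerSmith2"
    print("Backed up to", backup_beersmith_data(src, Path.home() / "Backups"))
```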

          Conclusion

          -

          Beersmith 2 is a great tool for homebrewers who want to create their own recipes and brew logs. It has many features and benefits that can help you brew better beer at home. To use Beersmith 2, you need an activation key that verifies your purchase of the software. You can buy Beersmith 2 from the official website or other authorized sellers, and receive an email with your activation key and a download link for the software. You can then install, activate, and use Beersmith 2 on your computer.

          -

          In this article, we have shown you how to install, use, and troubleshoot Beersmith 2 on your computer. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us or leave a comment below. Happy brewing!

          -

          FAQs

          -

          What are the system requirements for Beersmith 2?

          -

          The system requirements for Beersmith 2 are:

          -
            -
          • Windows XP/Vista/7/8/10 (32-bit or 64-bit)
          • -
          • Mac OS X 10.5 or higher (Intel only)
          • -
          • Linux Ubuntu 10.04 or higher (32-bit or 64-bit)
          • -
          • 1 GB of RAM
          • -
          • 100 MB of disk space
          • -
          • An internet connection for downloading, installing, updating, and activating the software
          • -
          -

          How much does Beersmith 2 cost and how long does the activation key last?

          -

          The price of Beersmith 2 is $27.95 for a single license, which includes free updates for life. You can also get a discounted price if you buy multiple licenses or bundle packages.

          -

          The activation key lasts for as long as you use Beersmith 2 on your computer. You can use it on up to two devices at a time. You can also transfer it to another device if you want to, as long as you deactivate it from the previous device first.

          -

          What are some alternatives and competitors to Beersmith 2?

          -

          Some of the alternatives and competitors to Beersmith 2 are:

          -
            -
          • Brewfather: A web-based software that works on any device and syncs with cloud storage. It has similar features and functions as Beersmith 2, but also integrates with various smart devices and sensors. It costs $19.99 per year or $1.99 per month.
          • -
          • Brewtarget: A free and open-source software that runs on Windows, Mac, and Linux. It has a simple and user-friendly interface, but also offers advanced options and calculations. It supports multiple languages and units.
          • -
          • Brewer's Friend: A web-based software that also has mobile apps for iOS and Android. It has a large database of recipes and ingredients, as well as tools and calculators for brewing. It costs $24.99 per year or $4.99 per month.
          • -
          -

          How can I share my recipes and feedback with other Beersmith users?

          -

          You can share your recipes and feedback with other Beersmith users in several ways:

          -
            -
          • You can use the cloud service in Beersmith 2 to upload and download your recipes online. You can also rate and comment on other users' recipes.
          • -
          • You can use the export and import functions in Beersmith 2 to save and load your recipes in various formats, such as XML, HTML, TXT, or BeerXML. You can then share these files via email, social media, or other platforms; a minimal BeerXML reading sketch follows this list.
          • -
          • You can join the online forums, blogs, podcasts, and videos on the Beersmith website or other related websites. You can also join the Beersmith Facebook group and Twitter account to interact with other users and get news and updates.
          • -
          -
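          If you want to read an exported BeerXML file programmatically, the Python standard library is enough. This is a minimal sketch, assuming the usual BeerXML 1.0 layout (a RECIPES root holding RECIPE elements with NAME, BREWER, and STYLE children); it is not a Beersmith feature, and the file name is just a placeholder.

```python
# Minimal sketch: list the recipes in a BeerXML export.
import xml.etree.ElementTree as ET

def list_recipes(path):
    """Return one dict per RECIPE element found in the file."""
    tree = ET.parse(path)
    recipes = []
    for recipe in tree.getroot().iter("RECIPE"):
        recipes.append({
            "name": recipe.findtext("NAME", default="(unnamed)"),
            "brewer": recipe.findtext("BREWER", default="(unknown)"),
            "style": recipe.findtext("STYLE/NAME", default="(no style)"),
        })
    return recipes

if __name__ == "__main__":
    for r in list_recipes("my_recipes.xml"):  # placeholder file name
        print(f"{r['name']} -- {r['style']} (by {r['brewer']})")
```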

          How can I learn more about beer brewing and improve my skills?

          -

          You can learn more about beer brewing and improve your skills by:

          -
            -
          • Reading books, magazines, articles, and blogs about beer brewing. Some of the recommended books are How to Brew by John Palmer, The Complete Joy of Homebrewing by Charlie Papazian, and Designing Great Beers by Ray Daniels.
          • -
          • Watching videos, podcasts, webinars, and courses about beer brewing. Some of the recommended sources are Basic Brewing Video, The Brewing Network, Brew Your Own Magazine, and Craft Beer & Brewing Magazine.
          • -
          • Attending events, workshops, seminars, and festivals about beer brewing. Some of the recommended events are Homebrew Con, National Homebrew Competition, Great American Beer Festival, and World Beer Cup.
          • -
          • Joining clubs, groups, associations, and communities about beer brewing. Some of the recommended organizations are American Homebrewers Association, Brewers Association, Beer Judge Certification Program, and local homebrew clubs.
          • -

          b2dd77e56b
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bluesoleil 10 Serial Number Crack Extra Quality Full Version Free Download.md b/spaces/stomexserde/gpt4-ui/Examples/Bluesoleil 10 Serial Number Crack Extra Quality Full Version Free Download.md deleted file mode 100644 index 98719c1508ed68b470e4398f20444c54d02171b8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Bluesoleil 10 Serial Number Crack Extra Quality Full Version Free Download.md +++ /dev/null @@ -1,33 +0,0 @@ - -

          Bluesoleil 10 Serial Number Crack Full Version Free Download

          -

          Bluesoleil 10 is a software that allows you to connect your Bluetooth devices to your computer. You can use it to transfer files, sync contacts, listen to music, make phone calls, and more. But what if you don't have a license key to activate it? Don't worry, you can use a crack to bypass the activation process and enjoy the full version of Bluesoleil 10 for free.

          -

          In this article, we will show you how to download and install Bluesoleil 10 serial number crack full version free. You will also learn about the features and benefits of using Bluesoleil 10 on your PC.

          -

          Bluesoleil 10 Serial Number Crack Full Version Free Download


          Download ☆☆☆ https://urlgoal.com/2uIaKu



          -

          How to Download and Install Bluesoleil 10 Serial Number Crack Full Version Free

          -

          Follow these steps to download and install Bluesoleil 10 serial number crack full version free:

          -
            -
          1. Go to the official website of Bluesoleil 10 and download the trial version of the software. You can choose between the 32-bit or 64-bit version depending on your system.
          2. -
          3. Run the setup file and follow the instructions to install Bluesoleil 10 on your computer. You will need to restart your PC after the installation is complete.
          4. -
          5. Download the crack file from a reliable source. You can search for it on Google or use the link provided below.
          6. -
          7. Extract the crack file using WinRAR or any other file extractor. You will find a folder named "Crack" with two files: "BlueSoleil.exe" and "SerialNumber.dll".
          8. -
          9. Copy and paste these two files into the installation directory of Bluesoleil 10. You can find it by right-clicking on the Bluesoleil 10 shortcut on your desktop and choosing "Open file location".
          10. -
          11. Replace the original files when prompted. This will overwrite the trial version of Bluesoleil 10 with the cracked version.
          12. -
          13. Launch Bluesoleil 10 from your desktop or start menu. You will see that it is activated and ready to use.
          14. -
          -

          Congratulations! You have successfully downloaded and installed Bluesoleil 10 serial number crack full version free. Now you can enjoy all the features and benefits of this software without paying anything.

          -

          Features and Benefits of Using Bluesoleil 10

          -

          Bluesoleil 10 is a powerful and versatile software that allows you to connect your Bluetooth devices to your computer. Here are some of the features and benefits of using Bluesoleil 10:

          -
            -
          • You can connect up to 17 Bluetooth devices at the same time, such as keyboards, mice, headphones, speakers, printers, scanners, cameras, phones, tablets, etc.
          • -
          • You can easily manage your Bluetooth connections with a user-friendly interface. You can view the status, signal strength, battery level, and other information of your connected devices.
          • -
          • You can transfer files between your computer and your Bluetooth devices with high speed and security. You can also sync contacts, messages, calendars, photos, music, videos, and more.
          • -
          • You can use your Bluetooth headset or speaker to listen to music or make phone calls from your computer. You can also use your Bluetooth phone as a remote control for your computer.
          • -
          • You can share your internet connection with your Bluetooth devices via a personal hotspot. You can also access other Bluetooth networks or devices nearby.
          • -
          • You can customize your Bluetooth settings according to your preferences. You can change the name, icon, theme, language, sound effects, etc. of Bluesoleil 10.
          • -
          -

          Bluesoleil 10 is a software that you don't want to miss if you have Bluetooth devices. It will make your life easier and more enjoyable by connecting your devices wirelessly.

          -

          Conclusion

          -

          Bluesoleil 10 is a software that allows you to connect your Bluetooth devices to your computer. It has many features and benefits that will enhance your experience with Bluetooth technology. However, it requires a license key to activate it which can be expensive for some users. That's why we have shown you how to download and install Bluesoleil 10 serial number crack full version free in this article. By using a crack, you can bypass the activation process and use Bluesoleil 10 without any

          -

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Dream Zindagi 2 Full Movie In Hindi Hd 720p.md b/spaces/stomexserde/gpt4-ui/Examples/Download Dream Zindagi 2 Full Movie In Hindi Hd 720p.md deleted file mode 100644 index fb1486748f581797379fa9978090361a96f63c07..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Dream Zindagi 2 Full Movie In Hindi Hd 720p.md +++ /dev/null @@ -1,16 +0,0 @@ - -

          Download Dream Zindagi 2 Full Movie In Hindi Hd 720p

          -

          Dream Zindagi 2 is the sequel to the 2017 Bollywood action thriller movie Dream Zindagi, starring Nishant Kumar, Shubhra Ghosh, Aliya Khan and Shivani Kundanani. The movie follows the lives of four friends who are struggling to achieve their dreams in the city of Mumbai. They face various challenges and obstacles in their personal and professional lives, but they never give up on their hopes and aspirations.

          -

          Download Dream Zindagi 2 Full Movie In Hindi Hd 720p


          Download ····· https://urlgoal.com/2uI7JD



          -

          The movie is directed by Rakesh Kharvi and produced by Mahesh Rana. The music is composed by Mahesh Matkar and the lyrics are written by Tarkesh Mishra and Rakesh Kharvi. The movie has been praised for its realistic portrayal of the urban youth and their struggles. The movie also has some thrilling action sequences and romantic moments that keep the audience engaged.

          -

          If you are a fan of Bollywood movies and want to watch Dream Zindagi 2 in high quality, you can download it from our website. We provide you with the best and fastest download links for Dream Zindagi 2 full movie in Hindi HD 720p. You can enjoy the movie on your laptop, desktop, mobile or tablet without any hassle. You can also watch the trailer and read the reviews of the movie on our website.

          -

          So what are you waiting for? Download Dream Zindagi 2 full movie in Hindi HD 720p from our website and enjoy this amazing movie with your friends and family. You will not regret it!

          -

          - -

          Dream Zindagi 2 is a movie that will inspire you to chase your dreams and overcome your challenges. The movie shows how the four friends support each other and face their enemies with courage and determination. The movie also has a message of friendship, love and loyalty that will touch your heart.

          -

          The movie has some amazing performances by the lead actors and actresses. Nishant Kumar plays the role of Raj, a aspiring singer who wants to make it big in the music industry. Shubhra Ghosh plays the role of Priya, a talented dancer who dreams of becoming a choreographer. Aliya Khan plays the role of Zara, a smart and ambitious journalist who wants to expose the truth behind the corruption in the city. Shivani Kundanani plays the role of Anjali, a sweet and bubbly girl who wants to find her true love.

          -

          The movie also has some powerful villains who try to stop the four friends from achieving their dreams. Sajjad Delafrooz plays the role of Abu Usman, a ruthless terrorist who kidnaps a group of Indian and Pakistani nurses in Iraq. Paresh Rawal plays the role of Theuban, a corrupt politician who is involved in illegal activities. Nawab Shah plays the role of Pawan, a gangster who is after Raj for his money. Suraj Rana plays the role of Tiger's friend, a secret agent who helps Raj and his friends in their mission.

          -

          Dream Zindagi 2 is a movie that will keep you on the edge of your seat with its thrilling plot and action scenes. The movie also has some beautiful songs and dances that will make you groove along. The movie is a perfect blend of entertainment and emotion that will make you laugh, cry and cheer for the four friends.

          -

          Don't miss this opportunity to watch Dream Zindagi 2 full movie in Hindi HD 720p on our website. You can download it easily and quickly from our secure and reliable links. You can also share your feedback and comments on our website after watching the movie. We would love to hear from you!

          81aa517590
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ArchiCAD 16 FR.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ArchiCAD 16 FR.md deleted file mode 100644 index 95bb6e16d4ecccd8a16c1e6abb4aa49bbd4026db..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ArchiCAD 16 FR.md +++ /dev/null @@ -1,24 +0,0 @@ -
          -

          How to Download and Install Archicad 16 FR

          -

          Archicad 16 FR is the French-language version of Archicad, a BIM application for architects. Archicad 16 FR allows you to design, visualize, document, and deliver projects of all sizes with its powerful set of built-in tools and easy-to-use interface. In this article, we will show you how to download and install Archicad 16 FR on your computer.

          -

          ArchiCAD 16 FR


          Download Filehttps://cinurl.com/2uEYCg



          -

          Step 1: Download Archicad 16 FR

          -

          To download Archicad 16 FR, you need to visit the official website of Graphisoft, the developer of Archicad. Graphisoft offers various downloads for Archicad, such as license activation, IFC, MEP, add-ons, goodies, libraries, and other extensions. You can find them on this page[^1^].

          -

          On the same page, you can also find the downloads for BIMcloud, a cloud-based collaboration platform for Archicad users, and BIMx, an interactive BIM communication tool for mobile devices and desktop computers. You can download them if you need them for your projects.

          -

          To download Archicad 16 FR specifically, you need to scroll down to the section "Downloads for STAR (T) Edition". STAR (T) Edition is a special version of Archicad that is available for free for students, teachers, and schools. You can download it if you are eligible for it.

          -

          -

          In the section "Downloads for STAR (T) Edition", you will see a list of languages that are supported by Archicad 16. Click on the link that says "French" to download Archicad 16 FR. You will be redirected to another page where you can choose between the 32-bit or 64-bit version of Archicad 16 FR. Choose the one that matches your system requirements and click on the download button.

          -

          Step 2: Install Archicad 16 FR

          -

          Once you have downloaded Archicad 16 FR, you need to install it on your computer. To do that, follow these steps:

          -
            -
          • Locate the downloaded file on your computer. It should be a ZIP file with a name like "AC16-FRA-3006-1.0.zip".
          • -
          3. Extract the ZIP file to a folder on your computer. You can use any software that can handle ZIP files, such as WinRAR or 7-Zip; a scripted alternative is sketched after this list.
          • -
          • Open the extracted folder and double-click on the file named "Setup.exe". This will launch the installation wizard of Archicad 16 FR.
          • -
          • Follow the instructions on the screen to complete the installation process. You will need to accept the license agreement, choose the installation folder, select the components to install, and enter your serial number if you have one.
          • -
          • When the installation is finished, click on "Finish" to exit the wizard.
          • -
          -
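          If you would rather script the extraction than install WinRAR or 7-Zip, Python's built-in zipfile module can unpack the archive. This is a minimal sketch; the paths are assumptions based on the file name mentioned above, so adjust them to wherever you actually saved the download.

```python
# Minimal sketch: unpack the downloaded archive with the standard library.
import zipfile
from pathlib import Path

archive = Path.home() / "Downloads" / "AC16-FRA-3006-1.0.zip"  # assumed location
target = Path.home() / "Downloads" / "AC16-FRA-setup"          # assumed destination

target.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)              # unpacks Setup.exe and the other files
    count = len(zf.namelist())
print(f"Extracted {count} files to {target}")
```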

          Step 3: Enjoy Archicad 16 FR

          -

          Congratulations! You have successfully downloaded and installed Archicad 16 FR on your computer. You can now start using it to create amazing architectural projects in French.

          -

          If you need more help or guidance on how to use Archicad 16 FR, you can watch this video[^2^] that shows you how to download and install Archicad 16 FR step by step. You can also visit the official website of Graphisoft[^3^] to learn more about Archicad and its features.

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG.md deleted file mode 100644 index 6898a479cab189f4b368405a7b7c493c811afe6b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG.md +++ /dev/null @@ -1,38 +0,0 @@ -

          Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG


          DOWNLOADhttps://cinurl.com/2uEXqk



          - -Flash Player is out of date, or may not work at all. The client requested to launch a HTTP tunnel and immediately closed the connection. A major JavaScript error occurred: memory leak. For details, see "Process Help". Run update. [STP-963] Compiler-Error when upgrading to C++17: CFGH-4346Q: - -How to find all the subsequences in a sequence - -Say I have a list - -list = [1,2,3,4,5,6] - -Is there a function which will return all the subsequences in the list, such that I can iterate on the output? - -Result: - -[[1], [2], [1, 2], [2, 3], [1, 2, 3], [3, 4], [1, 2, 3, 4], [4, 5], [1, 2, 3, 4, 5], [5, 6]] - -A: - -You can use itertools.combinations: - -import itertools - -list(itertools.combinations(list, 3)) - -# [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 2, 6), (1, 3, 4), (1, 3, 5), (1, 3, 6), (1, 4, 5), (1, 4, 6), (1, 5, 6)] - -That should give you all possible 3-subsequence combinations of [1, 2, 3, 4, 5, 6]. To get the list of subsequences, just slice it: - ->>> list(itertools.combinations(list, 3))[1:] - -[(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 2, 6), (1, 3, 4), (1, 3, 5), (1, 3, 6), (1, 4, 5), (1, 4, 6), (1, 5, 6)] - -And if you want to get all 6-subsequence combinations: - ->>> list(itertools.combinations( 4fefd39f24
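          The pasted thread above is page-scrape residue, but the question it quotes deserves a working answer: itertools.combinations(seq, 3) only yields the length-3 combinations, so it produces neither "all the subsequences" nor the sample output shown. Below is a minimal sketch, in plain Python with nothing from this page assumed, of the two usual readings: contiguous slices (which match the sample output) and combinations of every length.

```python
# Two readings of "all the subsequences" of [1, 2, 3, 4, 5, 6].
from itertools import combinations

def contiguous_sublists(seq):
    """Every non-empty slice seq[i:j], e.g. [2, 3] or [1, 2, 3, 4]."""
    return [list(seq[i:j])
            for i in range(len(seq))
            for j in range(i + 1, len(seq) + 1)]

def all_combinations(seq):
    """Every non-empty selection in order, of every length (2**n - 1 of them)."""
    return [list(c)
            for r in range(1, len(seq) + 1)
            for c in combinations(seq, r)]

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6]
    print(len(contiguous_sublists(data)))  # 21 contiguous slices
    print(len(all_combinations(data)))     # 63 = 2**6 - 1 combinations
```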
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Libro Teorias Del Aprendizaje Para Maestros Pdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Libro Teorias Del Aprendizaje Para Maestros Pdf.md deleted file mode 100644 index 6aef912f2d7a07a8279c1a3d0e0a7051779f42bf..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Libro Teorias Del Aprendizaje Para Maestros Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Libro Teorias Del Aprendizaje Para Maestros Pdf


          DOWNLOAD »»» https://cinurl.com/2uEYGd



          - -El estudio del aprendizaje ha sido de gran importancia para los psicólogos de las diferentes ... Todas estas teorías acerca del aprendizaje se complementan, ninguna de ellas ... -BIGGE, M.: Teorías del aprendizaje para maestros. Trillas. 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Ashes Cricket 2009 Pc Game Crack Free 11 [HOT].md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Ashes Cricket 2009 Pc Game Crack Free 11 [HOT].md deleted file mode 100644 index 5dc4cb44dc08306f78e44cb0ab2e8c31156ce87d..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Ashes Cricket 2009 Pc Game Crack Free 11 [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

          Ashes Cricket 2009 Pc Game Crack Free 11


          Download Filehttps://urluss.com/2uCFAz



          - -http://easyxlead.com/download.php?file=64 Ashes Cricket 2009 crack no cd ... download free games no ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/sxunwashere/rvc-voice/infer_pack/commons.py b/spaces/sxunwashere/rvc-voice/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/sxunwashere/rvc-voice/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = 
n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/t13718236382/web-ui/_next/static/chunks/app/page-0620aca274ab75da.js b/spaces/t13718236382/web-ui/_next/static/chunks/app/page-0620aca274ab75da.js deleted file mode 100644 index 72ba5e6bb92856af861440bbb8433c14a048c270..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/chunks/app/page-0620aca274ab75da.js +++ /dev/null @@ -1,9 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[931],{909:function(e,t,n){Promise.resolve().then(n.bind(n,17433))},25952:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return o}});let r=n(26927);n(86006);let l=r._(n(6353));function u(e){return{default:(null==e?void 0:e.default)||e}}function o(e,t){let n=l.default,r={loading:e=>{let{error:t,isLoading:n,pastDelay:r}=e;return null}};"function"==typeof e&&(r.loader=e),Object.assign(r,t);let o=r.loader;return n({...r,loader:()=>null!=o?o().then(u):Promise.resolve(u(()=>null))})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90761:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var n in t)Object.defineProperty(e,n,{enumerable:!0,get:t[n]})}(t,{suspense:function(){return l},NoSSR:function(){return u}}),n(26927),n(86006);let r=n(98687);function l(){let e=Error(r.NEXT_DYNAMIC_NO_SSR_CODE);throw e.digest=r.NEXT_DYNAMIC_NO_SSR_CODE,e}function u(e){let{children:t}=e;return t}},6353:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return o}});let 
r=n(26927),l=r._(n(86006)),u=n(90761),o=function(e){let t=Object.assign({loader:null,loading:null,ssr:!0},e);function n(e){let n=t.loading,r=l.default.createElement(n,{isLoading:!0,pastDelay:!0,error:null}),o=t.ssr?l.default.Fragment:u.NoSSR,a=t.lazy;return l.default.createElement(l.default.Suspense,{fallback:r},l.default.createElement(o,null,l.default.createElement(a,e)))}return t.lazy=l.default.lazy(t.loader),n.displayName="LoadableComponent",n}},17433:function(e,t,n){"use strict";n.r(t),n.d(t,{default:function(){return f}});var r=n(9268),l=n(25952),u=n.n(l),o=n(31405);n(80293);let a=u()(()=>Promise.all([n.e(121),n.e(130),n.e(680),n.e(372),n.e(173),n.e(780),n.e(642)]).then(n.bind(n,77592)),{loadableGenerated:{webpack:()=>[77592]},ssr:!1});function f(){let e=(0,o.Dt)();return(0,r.jsx)("html",{lang:"en",className:e,children:(0,r.jsx)("body",{children:(0,r.jsx)(a,{})})})}},31405:function(e,t,n){"use strict";var r,l;function u(){return localStorage.getItem("themeMode")||r.Light}function o(e){localStorage.setItem("themeMode",e)}n.d(t,{Dt:function(){return u},hY:function(){return r},pQ:function(){return o}}),(l=r||(r={})).Light="light",l.Dark="dark"},80293:function(){},83177:function(e,t,n){"use strict";/** - * @license React - * react-jsx-runtime.production.min.js - * - * Copyright (c) Meta Platforms, Inc. and affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var r=n(86006),l=Symbol.for("react.element"),u=Symbol.for("react.fragment"),o=Object.prototype.hasOwnProperty,a=r.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ReactCurrentOwner,f={key:!0,ref:!0,__self:!0,__source:!0};function i(e,t,n){var r,u={},i=null,c=null;for(r in void 0!==n&&(i=""+n),void 0!==t.key&&(i=""+t.key),void 0!==t.ref&&(c=t.ref),t)o.call(t,r)&&!f.hasOwnProperty(r)&&(u[r]=t[r]);if(e&&e.defaultProps)for(r in t=e.defaultProps)void 0===u[r]&&(u[r]=t[r]);return{$$typeof:l,type:e,key:i,ref:c,props:u,_owner:a.current}}t.Fragment=u,t.jsx=i,t.jsxs=i},9268:function(e,t,n){"use strict";e.exports=n(83177)}},function(e){e.O(0,[253,698,744],function(){return e(e.s=909)}),_N_E=e.O()}]); \ No newline at end of file diff --git a/spaces/tabeina/bingo1/src/components/chat.tsx b/spaces/tabeina/bingo1/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & 
{ initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
          - -
          - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
          - -
          - ) : null} - - ) : null} -
          - - -
          - ) -} diff --git a/spaces/taesiri/DeticChatGPT/detic/predictor.py b/spaces/taesiri/DeticChatGPT/detic/predictor.py deleted file mode 100644 index 318205acb90d47a54ff6f34400e1da744b2d85ba..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/detic/predictor.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import atexit -import bisect -import multiprocessing as mp -from collections import deque -import cv2 -import torch - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.video_visualizer import VideoVisualizer -from detectron2.utils.visualizer import ColorMode, Visualizer - -from .modeling.utils import reset_cls_test - - -def get_clip_embeddings(vocabulary, prompt='a '): - from detic.modeling.text.text_encoder import build_text_encoder - text_encoder = build_text_encoder(pretrain=True) - text_encoder.eval() - texts = [prompt + x for x in vocabulary] - emb = text_encoder(texts).detach().permute(1, 0).contiguous().cpu() - return emb - -BUILDIN_CLASSIFIER = { - 'lvis': 'datasets/metadata/lvis_v1_clip_a+cname.npy', - 'objects365': 'datasets/metadata/o365_clip_a+cnamefix.npy', - 'openimages': 'datasets/metadata/oid_clip_a+cname.npy', - 'coco': 'datasets/metadata/coco_clip_a+cname.npy', -} - -BUILDIN_METADATA_PATH = { - 'lvis': 'lvis_v1_val', - 'objects365': 'objects365_v2_val', - 'openimages': 'oid_val_expanded', - 'coco': 'coco_2017_val', -} - -class VisualizationDemo(object): - def __init__(self, cfg, args, - instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. - Useful since the visualization logic can be slow. - """ - if args.vocabulary == 'custom': - self.metadata = MetadataCatalog.get("__unused") - self.metadata.thing_classes = args.custom_vocabulary.split(',') - classifier = get_clip_embeddings(self.metadata.thing_classes) - else: - self.metadata = MetadataCatalog.get( - BUILDIN_METADATA_PATH[args.vocabulary]) - classifier = BUILDIN_CLASSIFIER[args.vocabulary] - - num_classes = len(self.metadata.thing_classes) - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - num_gpu = torch.cuda.device_count() - self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu) - else: - self.predictor = DefaultPredictor(cfg) - reset_cls_test(self.predictor.model, classifier, num_classes) - - def run_on_image(self, image): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - vis_output = None - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. 
- image = image[:, :, ::-1] - visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_output = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(self.cpu_device), segments_info - ) - else: - if "sem_seg" in predictions: - vis_output = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - if "instances" in predictions: - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output - - def _frame_from_video(self, video): - while video.isOpened(): - success, frame = video.read() - if success: - yield frame - else: - break - - def run_on_video(self, video): - """ - Visualizes predictions on frames of the input video. - - Args: - video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be - either a webcam or a video file. - - Yields: - ndarray: BGR visualizations of each video frame. - """ - video_visualizer = VideoVisualizer(self.metadata, self.instance_mode) - - def process_predictions(frame, predictions): - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_frame = video_visualizer.draw_panoptic_seg_predictions( - frame, panoptic_seg.to(self.cpu_device), segments_info - ) - elif "instances" in predictions: - predictions = predictions["instances"].to(self.cpu_device) - vis_frame = video_visualizer.draw_instance_predictions(frame, predictions) - elif "sem_seg" in predictions: - vis_frame = video_visualizer.draw_sem_seg( - frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - - # Converts Matplotlib RGB format to OpenCV BGR format - vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR) - return vis_frame - - frame_gen = self._frame_from_video(video) - if self.parallel: - buffer_size = self.predictor.default_buffer_size - - frame_data = deque() - - for cnt, frame in enumerate(frame_gen): - frame_data.append(frame) - self.predictor.put(frame) - - if cnt >= buffer_size: - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - - while len(frame_data): - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - else: - for frame in frame_gen: - yield process_predictions(frame, self.predictor(frame)) - - -class AsyncPredictor: - """ - A predictor that runs the model asynchronously, possibly on >1 GPUs. - Because rendering the visualization takes considerably amount of time, - this helps improve throughput a little bit when rendering videos. 
- """ - - class _StopToken: - pass - - class _PredictWorker(mp.Process): - def __init__(self, cfg, task_queue, result_queue): - self.cfg = cfg - self.task_queue = task_queue - self.result_queue = result_queue - super().__init__() - - def run(self): - predictor = DefaultPredictor(self.cfg) - - while True: - task = self.task_queue.get() - if isinstance(task, AsyncPredictor._StopToken): - break - idx, data = task - result = predictor(data) - self.result_queue.put((idx, result)) - - def __init__(self, cfg, num_gpus: int = 1): - """ - Args: - cfg (CfgNode): - num_gpus (int): if 0, will run on CPU - """ - num_workers = max(num_gpus, 1) - self.task_queue = mp.Queue(maxsize=num_workers * 3) - self.result_queue = mp.Queue(maxsize=num_workers * 3) - self.procs = [] - for gpuid in range(max(num_gpus, 1)): - cfg = cfg.clone() - cfg.defrost() - cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu" - self.procs.append( - AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue) - ) - - self.put_idx = 0 - self.get_idx = 0 - self.result_rank = [] - self.result_data = [] - - for p in self.procs: - p.start() - atexit.register(self.shutdown) - - def put(self, image): - self.put_idx += 1 - self.task_queue.put((self.put_idx, image)) - - def get(self): - self.get_idx += 1 # the index needed for this request - if len(self.result_rank) and self.result_rank[0] == self.get_idx: - res = self.result_data[0] - del self.result_data[0], self.result_rank[0] - return res - - while True: - # make sure the results are returned in the correct order - idx, res = self.result_queue.get() - if idx == self.get_idx: - return res - insert = bisect.bisect(self.result_rank, idx) - self.result_rank.insert(insert, idx) - self.result_data.insert(insert, res) - - def __len__(self): - return self.put_idx - self.get_idx - - def __call__(self, image): - self.put(image) - return self.get() - - def shutdown(self): - for _ in self.procs: - self.task_queue.put(AsyncPredictor._StopToken()) - - @property - def default_buffer_size(self): - return len(self.procs) * 5 diff --git a/spaces/terfces0erbo/CollegeProjectV2/Ioncube Php Encoder Nulled Io [HOT].md b/spaces/terfces0erbo/CollegeProjectV2/Ioncube Php Encoder Nulled Io [HOT].md deleted file mode 100644 index 922976f2bc531c220869b4f0f86c867d115d2521..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Ioncube Php Encoder Nulled Io [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

          ioncube php encoder nulled io


          Download ===== https://bytlly.com/2uGlbw



          -
          -The ionCube Loader is a PHP extension that enables PHP to execute files encoded using the ionCube Encoder. In this tutorial, we'll show you ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Keyword Ninja Full Free and Get Related Keywords from Google and Yahoo.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Keyword Ninja Full Free and Get Related Keywords from Google and Yahoo.md deleted file mode 100644 index 68964438673cdbc7194822c2e70e3c15e09a1c62..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Keyword Ninja Full Free and Get Related Keywords from Google and Yahoo.md +++ /dev/null @@ -1,153 +0,0 @@ -
          -

          Download Keyword Ninja Full Free and Boost Your SEO Strategy

          -

          If you are looking for a powerful tool to help you find profitable keywords for your website or blog, you might want to download Keyword Ninja full free. Keyword Ninja is a program that can generate hundreds of long-tail keywords from a single seed keyword, using the search suggestions of some of the world's largest websites, such as Google, Yahoo, Amazon, eBay, etc.

          -

          Download Keyword Ninja Full Free


          Download --->>> https://urlcod.com/2uK5bI



          -

          Long-tail keywords are phrases that consist of three or more words and are more specific and less competitive than short-tail keywords. They can help you rank higher on search engines, drive more targeted traffic to your site, and increase your conversions and sales.

          -

          How Keyword Ninja Works

          -

          Keyword Ninja is very easy to use and does not require any installation or registration. You can either enter a starting keyword manually or download an existing keyword list from your computer. Then you can choose from different sources and options to generate your long-tail keywords.

          -

          Keyword Ninja will then display your keywords in a table format, showing you the number of words, the search volume, the competition level, and the cost per click (CPC) for each keyword. You can also sort, filter, export, or copy your keywords to use them for your SEO strategy.

          -

          The Benefits of Keyword Ninja

          -

          Keyword Ninja can help you improve your SEO strategy in many ways. Here are some of the benefits of using Keyword Ninja:

          -


          -
- You can find hundreds of relevant and low-competition keywords for your niche or market.
- You can discover new ideas and topics for your content creation.
- You can optimize your web pages and articles with the right keywords to rank higher on search engines.
- You can create effective PPC campaigns with the best keywords for your budget and goals.
- You can spy on your competitors and see what keywords they are using and how they are performing.
- You can save time and money on keyword research and analysis.

          How to Download Keyword Ninja Full Free

          -

          If you want to download Keyword Ninja full free, you can follow these steps:

          -
1. Go to https://apkpure.com/keyword-ninja-google-adwords-keyword-tool/io.keywordninja/download and click on the "Download APK" button.
2. Save the file on your Android device and install it by following the instructions on the screen.
3. Launch the app and enter your Google Ads account details to connect it with Keyword Ninja.
4. Enter a seed keyword or upload a keyword list, then choose your sources and options to generate your long-tail keywords.
5. View, sort, filter, export, or copy your keywords as you wish.

          Downloading Keyword Ninja full free will give you access to all the features and functions of the software without any limitations or restrictions. You can use it as long as you want and as often as you need.

          -
          How to Use Keyword Ninja for Your SEO Strategy
          -

          Once you have downloaded Keyword Ninja full free and generated your long-tail keywords, you can use them for various purposes in your SEO strategy. Here are some of the ways you can use Keyword Ninja for your SEO strategy:

          -
- You can use your long-tail keywords as titles or headlines for your web pages or articles. This will help you attract more visitors and rank higher on search engines.
- You can use your long-tail keywords as subheadings or bullet points in your web pages or articles. This will help you organize your content and make it easier to read and understand.
- You can use your long-tail keywords as tags or meta descriptions for your web pages or articles. This will help you optimize your content for search engines and increase your click-through rate.
- You can use your long-tail keywords as anchor texts or links in your web pages or articles. This will help you create internal links and external links that boost your authority and relevance.
- You can use your long-tail keywords as ad groups or ad copies for your PPC campaigns. This will help you target more specific audiences and increase your conversions and sales.

Using Keyword Ninja for your SEO strategy will help you create more effective and efficient content and campaigns that drive more traffic to your website.
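
To make the on-page uses above concrete, here is a small Python sketch that turns one long-tail keyword into a title tag, a meta description, and an anchor link. The 70 and 160 character limits are common rules of thumb rather than requirements of Keyword Ninja or any search engine, and the example keyword and URL are placeholders.

```python
def on_page_snippets(keyword: str, url: str) -> dict:
    """Build simple on-page HTML snippets around a long-tail keyword."""
    title = f"{keyword.title()} - A Practical Guide"
    description = (
        f"Learn everything you need to know about {keyword}, "
        f"with tips you can apply today."
    )
    return {
        "title_tag": f"<title>{title[:70]}</title>",                # keep titles short
        "meta_description": f'<meta name="description" content="{description[:160]}">',
        "anchor": f'<a href="{url}">{keyword}</a>',                  # keyword as anchor text
    }

snippets = on_page_snippets("long tail keyword research tools", "https://example.com/keyword-guide")
for name, html in snippets.items():
    print(f"{name}: {html}")
```

In practice you would still rewrite these strings by hand so they read naturally; the sketch only shows where each keyword placement ends up in the page markup.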

          -
          How to Update Keyword Ninja Full Free
          -

          If you have downloaded Keyword Ninja full free and want to keep it updated with the latest features and functions, you can follow these steps:

          -
1. Open the app and go to the settings menu.
2. Tap on the check for updates option and see if there is a new version available.
3. If there is a new version available, tap on the download and install option and wait for the process to complete.
4. Restart the app and enjoy the new features and functions of Keyword Ninja full free.

          Updating Keyword Ninja full free will ensure that you have the best performance and functionality of the software. You can also check for updates on the developer's website or on the app store.

How to Get Support for Keyword Ninja Full Free

          If you have any problems or questions about Keyword Ninja full free, you can get support from the developer's team or from other users. Here are some of the ways you can get support for Keyword Ninja full free:

          -
- You can contact the developer's support team by sending an email to keywordninja.io@gmail.com or by filling out the contact form on their website.
- You can visit their website and read their FAQ section or their blog posts for more information and tips about Keyword Ninja full free.
- You can join their Facebook group or their Telegram channel and interact with other users and experts of Keyword Ninja full free.
- You can leave a review or a comment on the app store or on the web page where you downloaded Keyword Ninja full free and share your feedback or suggestions.

          Getting support for Keyword Ninja full free will help you solve any issues or doubts you may have about the software. You can also learn more about how to use it effectively and efficiently for your SEO strategy.

How to Review Keyword Ninja Full Free

          If you have downloaded Keyword Ninja full free and used it for your SEO strategy, you might want to review the software and share your opinion with others. You can review Keyword Ninja full free by doing some of these activities:

          -
- You can rate and review the app on the app store or on the web page where you downloaded Keyword Ninja full free. You can give a star rating and write a comment about your experience with the software.
- You can write a blog post or an article about Keyword Ninja full free and how it helped you with your SEO strategy. You can include screenshots, examples, tips, and links to the software.
- You can create a video or a podcast about Keyword Ninja full free and how it works. You can show how to use the software, demonstrate its features and functions, and share your results and feedback.
- You can post on social media or online forums about Keyword Ninja full free and how you like it. You can tag the developer's account or use hashtags related to the software.

          Reviewing Keyword Ninja full free will help you express your satisfaction or dissatisfaction with the software. It will also help other users and potential customers to learn more about the software and decide whether to download it or not.

How to Recommend Keyword Ninja Full Free to Others

          If you have downloaded Keyword Ninja full free and loved it, you might want to recommend it to others who might benefit from it. You can recommend Keyword Ninja full free by doing some of these activities:

          -
- You can tell your friends, family, colleagues, or clients about Keyword Ninja full free and how it can help them with their SEO strategy. You can show them how to download and use the software or send them a link to the software.
- You can share your review or your content about Keyword Ninja full free with others on social media or online forums. You can invite others to try the software or ask for their opinions.
- You can join the developer's affiliate program and earn commissions for every sale you generate through your referral link. You can promote Keyword Ninja full free on your website, blog, email list, or social media accounts.
- You can create a testimonial or a case study about Keyword Ninja full free and how it improved your SEO performance. You can send it to the developer's team or post it on their website or social media accounts.

          Recommending Keyword Ninja full free will help you spread the word about the software and its benefits. It will also help the developer's team to grow their customer base and improve their product.

How to Customize Keyword Ninja Full Free

          If you have downloaded Keyword Ninja full free and want to customize it according to your preferences and needs, you can follow these steps:

          -
1. Open the app and go to the settings menu.
2. Tap on the customization option and see the different features and functions that you can customize.
3. You can customize the sources and options for generating your long-tail keywords, such as choosing the websites, languages, countries, devices, etc.
4. You can customize the display and format of your keywords, such as choosing the columns, filters, sorting, exporting, copying, etc.
5. You can customize the look and feel of the app, such as choosing the theme, background, music, and sound effects.
6. You can customize your profile and account details, such as choosing your name, email, password, avatar, etc.

Customizing Keyword Ninja full free will help you make the software more suitable and comfortable for the way you work.

How to Learn More About Keyword Ninja Full Free

          If you have downloaded Keyword Ninja full free and want to learn more about it and how to use it effectively and efficiently, you can follow these steps:

          -
- You can read the information and resources provided by the software, such as the help menu, the FAQ section, the blog posts, etc.
- You can watch the tutorials and videos provided by the software or by other users on YouTube or other platforms.
- You can listen to the podcasts and audiobooks provided by the software or by other users on SoundCloud or other platforms.
- You can read the reviews and comments provided by other users on the app store or on the web page where you downloaded Keyword Ninja full free.
- You can join the online courses and webinars provided by the developer's team or by other experts on their website or on other platforms.

Learning more about Keyword Ninja full free will help you understand how it works and how to use it for your SEO strategy. You can also learn more about keyword research and search engine optimization in general by reading the information and resources provided by the software or by doing some online research.

Conclusion

Keyword Ninja full free is a keyword research tool that generates hundreds of long-tail keyword ideas from a seed keyword or an uploaded keyword list, and shows you the search volume, the competition level, and the cost per click for each keyword.

Keyword Ninja full free can help you improve your SEO strategy by giving you relevant, low-competition keywords for your content, your on-page optimization, and your PPC campaigns, and by letting you see what keywords your competitors are using.

If you want to download Keyword Ninja full free, you can follow the steps above to install the app on your Android device and connect it with your Google Ads account.

If you want to get the most out of Keyword Ninja full free, you can customize its settings, keep it updated, and reach out to the developer's team or other users whenever you run into problems.

Keyword Ninja full free is a fast and easy way to find the keywords that matter for your niche, whether you are writing content, optimizing pages, or running ads.

          679dcb208e
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Photoshop CS3 Full Crack 64 Bit and Enjoy Its Amazing Features.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Photoshop CS3 Full Crack 64 Bit and Enjoy Its Amazing Features.md deleted file mode 100644 index c2109f2287c77057e092d3593cf4fe095d0828d3..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Photoshop CS3 Full Crack 64 Bit and Enjoy Its Amazing Features.md +++ /dev/null @@ -1,35 +0,0 @@ -
          -

          How to Download Photoshop CS3 Full Crack 64 Bit for Free

          -

          Photoshop CS3 is one of the most popular and powerful photo editing software in the world. It offers a wide range of features and tools that can help you create stunning images and graphics. Photoshop CS3 is compatible with Windows XP, Vista and 7, but it requires a 64-bit system to run smoothly.

          -

          If you want to download Photoshop CS3 full crack 64 bit for free, you may be tempted to look for some websites that offer cracked versions of the software. However, this is not a safe or legal option. Cracked software may contain viruses, malware, spyware or other harmful components that can damage your computer or compromise your privacy. Moreover, downloading cracked software is a violation of the software license agreement and may result in legal consequences.

          -

          download photoshop cs3 full crack 64 bit


          Download Zip ✵✵✵ https://urlcod.com/2uK5G9



          -

          Therefore, the best way to download Photoshop CS3 full crack 64 bit for free is to use the official trial version of the software and then activate it with a valid serial number. The trial version of Photoshop CS3 allows you to use all the features of the software for up to 30 days without any limitations. After that, you need to purchase a serial number from the official website or an authorized reseller to continue using the software.

          -

          To download Photoshop CS3 full crack 64 bit for free, follow these steps:

          -
1. Go to the official website of Adobe Photoshop at https://www.adobe.com/products/photoshop.html and click on the "Free Trial" button.
2. Select your platform (Windows or Mac) and click on the "Download Now" button.
3. Save the file to your computer and run it to install Photoshop CS3 on your system.
4. Launch Photoshop CS3 and enjoy its features for up to 30 days.

          To activate Photoshop CS3 with a serial number, follow these steps:

          -
1. Purchase a serial number from the official website or an authorized reseller. You can choose between a single-user license ($649) or a multi-user license ($999).
2. Open Photoshop CS3 and click on the "Help" menu and then on "Activate".
3. Enter your serial number in the corresponding field and click on "Next".
4. Follow the instructions on the screen to complete the activation process.
5. Restart Photoshop CS3 and enjoy its full features without any limitations.

          As you can see, downloading Photoshop CS3 full crack 64 bit for free is not difficult or risky if you use the official trial version and then activate it with a valid serial number. This way, you can enjoy all the benefits of Photoshop CS3 without compromising your computer or your legal rights.

          - -

          Photoshop CS3 is not only a software for photo editing, but also a versatile tool that can handle various tasks related to graphic design, web design, digital art and more. Here are some of the features that Photoshop CS3 offers:

          -
- Improved performance and stability on 64-bit systems.
- New and enhanced tools such as Quick Selection, Refine Edge, Smart Filters, Vanishing Point and more.
- Improved support for Camera Raw formats and Adobe Bridge.
- New and improved user interface with customizable panels and menus.
- Integration with other Adobe products such as Illustrator, InDesign, Flash and more.

          With Photoshop CS3, you can easily create and edit stunning images and graphics for various purposes. Photoshop CS3 is compatible with Windows XP, Vista and 7. It also supports multiple languages. You can download Photoshop CS3 from the official website and try it for free for 30 days. If you like it, you can purchase a serial number and activate it to enjoy its full features.

          -

          ddb901b051
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Xforce 2017 for CorelDRAW 2017 and Enjoy Full Features.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Xforce 2017 for CorelDRAW 2017 and Enjoy Full Features.md deleted file mode 100644 index 66d716a682a704837704212d33f0203d35c1810e..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Xforce 2017 for CorelDRAW 2017 and Enjoy Full Features.md +++ /dev/null @@ -1,28 +0,0 @@ -
          -

          How to Download Xforce 2017 for CorelDRAW 2017 for Free

          -

          If you are looking for a way to download Xforce 2017 for CorelDRAW 2017 for free, you have come to the right place. Xforce 2017 is a powerful keygen tool that can generate activation codes for any Corel product, including CorelDRAW 2017. With Xforce 2017, you can enjoy the full features of CorelDRAW 2017 without paying a dime.

          -

          In this article, we will show you how to download Xforce 2017 for CorelDRAW 2017 for free and how to use it to activate your CorelDRAW 2017 software. Follow the steps below and get ready to unleash your creativity with CorelDRAW 2017.

          -

          xforce 2017 free download for coreldraw 2017


          Download Filehttps://urlcod.com/2uK6mN



          -

          Step 1: Download Xforce 2017 for CorelDRAW 2017

          -

          The first step is to download Xforce 2017 for CorelDRAW 2017 from a reliable source. You can use the link below to download Xforce 2017 for CorelDRAW 2017 for free. The file size is about 4 MB and it is compatible with Windows 10, 8.1, 8, and 7.

          -

          Download Xforce 2017 for CorelDRAW 2017

          -

          After downloading the file, extract it using WinRAR or any other file compression software. You will see a folder named Xforce 2017 with two files inside: x-force.exe and xf-adsk2018_x64.exe.

          -

          Step 2: Install CorelDRAW 2017

          -

          The next step is to install CorelDRAW 2017 on your computer. You can use the official trial version of CorelDRAW 2017 from the Corel website or any other source. The trial version will work for 15 days, but you can extend it indefinitely with Xforce 2017.

          -

          Download CorelDRAW 2017 Trial Version

          -

          -

          After downloading the trial version, run the setup file and follow the instructions to install CorelDRAW 2017 on your computer. You will need to create a Corel account and sign in to activate the trial version.

          -

          Step 3: Activate CorelDRAW 2017 with Xforce 2017

          -

          The final step is to activate CorelDRAW 2017 with Xforce 2017. This is the most important step, so make sure you follow it carefully.

          -

          Before you run Xforce 2017, you need to disable your antivirus software and internet connection. This is because Xforce 2017 is detected as a virus by some antivirus programs, but it is actually safe and harmless. You also need to close any running Corel programs.

          -

          After disabling your antivirus and internet connection, open the Xforce 2017 folder and run x-force.exe as administrator. You will see a window like this:

(Screenshot: Xforce window)

          In the Product field, select "CorelDRAW Graphics Suite" from the drop-down menu. In the Version field, select "CorelDRAW Graphics Suite - Version number" from the drop-down menu. The version number should match your installed version of CorelDRAW 2017.

          -

          For example, if you have installed CorelDRAW Graphics Suite X8 - Version number:18.0.0.448 , then select "CorelDRAW Graphics Suite - Version number:18.0.0.448" from the drop-down menu.

          -

          After selecting the product and version, click on "Generate". You will see a series of numbers and letters in the Activation field. This is your activation code.

          -

          Copy the activation code and open CorelDRAW 2017 on your computer. You will see a window like this:

(Screenshot: Corel activation window)

          Click on "Already Purchased?" at the bottom right corner of the window. You will see another window like this:

          -https://urlcod.com/2uK8En



          - -

          What You Need to Play Fight Night Champion for PC

          - -

          To play Fight Night Champion for PC, you will need the following:

          - -
- A PC with Windows 7 or higher operating system.
- A PlayStation 3 emulator. We recommend RPCS3, which is a free and open-source emulator that can run many PS3 games on PC. You can download it from https://rpcs3.net/.
- A PS3 BIOS file. This is a file that contains the firmware of the PS3 console, which is required for the emulator to work. You can obtain it from your own PS3 console or from online sources. Be careful not to download any malicious or illegal files (a simple checksum check for downloads like this is sketched just after this list).
- A copy of Fight Night Champion for PS3. You can either use a physical disc or a digital download. If you use a disc, you will need a Blu-ray drive to read it. If you use a digital download, you will need a .pkg file and a .rap file. These are files that contain the game data and the license activation respectively. You can get them from online sources or from your own PS3 console.
- A controller. You can use your keyboard and mouse to play the game, but we recommend using a controller for a better experience. You can use any controller that is compatible with your PC, such as an Xbox controller or a DualShock controller.
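
Since the list above warns you to be careful with downloaded files, here is a small, generic Python sketch that compares a downloaded file against a known-good SHA-256 checksum before you use it. The file name and the expected hash are placeholders; use the checksum published by a source you trust.

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder values -- replace with your actual file and the published checksum.
    file_path = "downloaded_file.iso"
    expected = "0000000000000000000000000000000000000000000000000000000000000000"

    actual = sha256_of(file_path)
    if actual.lower() == expected.lower():
        print("Checksum matches - the file looks intact.")
    else:
        print(f"Checksum mismatch!\n expected: {expected}\n actual:   {actual}")
        sys.exit(1)
```

A mismatch usually means the download was corrupted or tampered with, so delete the file and fetch it again from a source you trust.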

          How to Install and Play Fight Night Champion for PC

          - -

          Once you have all the necessary files and software, you can follow these steps to install and play Fight Night Champion for PC:

          - -
1. Install RPCS3 on your PC by running the .exe file that you downloaded from the official website.
2. Extract the PS3 BIOS file that you obtained and copy it to the dev_flash folder inside the RPCS3 folder.
3. Launch RPCS3 and go to File > Install Firmware. Select the PS3 BIOS file that you copied and wait for it to install.
4. If you are using a physical disc of Fight Night Champion, insert it into your Blu-ray drive and go to File > Boot Game. Select the disc drive and wait for the game to load.
5. If you are using a digital download of Fight Night Champion, copy the .pkg file and the .rap file to a folder on your PC. Go to File > Install .pkg and select the .pkg file. Wait for it to install. Then go to File > Install .rap(s) and select the .rap file. Wait for it to activate.
6. Go to Config > CPU and change the PPU Decoder to LLVM Recompiler and the SPU Decoder to ASMJIT Recompiler. This will improve the performance of the game.
7. Go to Config > GPU and change the Renderer to Vulkan. This will improve the graphics of the game.
8. Go to Config > Audio and change the Audio Out to XAudio2. This will improve the sound of the game.
9. Go to Config > Input/Output and configure your controller settings. You can map any button or axis to any function of the game.
10. Go to Config > System and check Enable Thread Scheduler. This will improve the stability of the game.
11. Go back to File > Boot Game and select Fight Night Champion from the list of installed games. Wait for it to load.
12. Enjoy playing Fight Night Champion for PC!

          Conclusion

          - -

          Fight Night Champion is one of the best boxing games ever made, and now you can play it on your PC using an emulator. With this method, you can enjoy the game with high-resolution graphics, smooth gameplay, and customizable controls. However, keep in mind that this method is not officially supported by EA Sports or RPCS3, so there may be some bugs or glitches along the way. Also, make sure that you have a powerful PC that can handle the emulation process without lagging or crashing.

          - -

          If you have any questions or problems regarding this method, feel free to leave a comment below or visit https://rpcs3.net/ for more information and support.

          - -

          We hope this article was helpful for you. Happy gaming!

          -



          679dcb208e
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Free Download Bluestacks For Windows 7 1gb Ram A Simple and Fast Way to Enjoy Android on Your Computer.md b/spaces/tialenAdioni/chat-gpt-api/logs/Free Download Bluestacks For Windows 7 1gb Ram A Simple and Fast Way to Enjoy Android on Your Computer.md deleted file mode 100644 index 002c13a73df1e3d97de8fdde4abf564fca02760b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Free Download Bluestacks For Windows 7 1gb Ram A Simple and Fast Way to Enjoy Android on Your Computer.md +++ /dev/null @@ -1,138 +0,0 @@ - -

          Free Download Bluestacks For Windows 7 1gb Ram

          -

          Do you want to enjoy your favorite Android apps and games on your Windows 7 PC? Do you have a low-end PC with only 1gb of ram? If yes, then you might be wondering if it is possible to run Bluestacks on your PC. Bluestacks is one of the most popular and widely used Android emulators that allows you to run Android apps and games on your PC. But can it run on a PC with only 1gb of ram? The answer is yes, it can. In this article, we will show you how to download Bluestacks for Windows 7 1gb ram and how to install and use it on your PC. We will also tell you about the features and benefits of using Bluestacks on Windows 7. So, let's get started.

          -

          Introduction

          -

          What is Bluestacks?

          -

          Bluestacks is a software that allows you to run Android apps and games on your Windows or Mac PC. It creates a virtual Android environment on your PC and lets you access the Google Play Store and other Android services. You can use your mouse and keyboard to control the apps and games, or you can also connect a gamepad or a controller for a better gaming experience. You can also customize the settings and preferences of Bluestacks according to your needs.

          -

          Free Download Bluestacks For Windows 7 1gb Ram


          Download Ziphttps://urlcod.com/2uK5fs



          -

          Why use Bluestacks on Windows 7?

          -

          There are many reasons why you might want to use Bluestacks on Windows 7. Some of them are:

          -
- You can enjoy a larger screen and better graphics while playing Android games.
- You can use multiple apps at the same time without switching between devices.
- You can access apps that are not available for Windows or are not compatible with your PC.
- You can backup and sync your data across devices using Google Drive or other cloud services.
- You can test and debug your own Android apps without using a physical device.

          How to download Bluestacks for Windows 7 1gb ram?

          -

          To download Bluestacks for Windows 7 1gb ram, you need to follow these steps:

          -


          -
1. Go to the official website of Bluestacks at https://www.bluestacks.com/.
2. Click on the "Download" button and wait for the download to complete.
3. Once the download is finished, open the installer file and follow the instructions to install Bluestacks on your PC.

          Note: You need to have at least 4gb of free disk space and an internet connection to install Bluestacks.
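
If you want to confirm that figure before running the installer, a quick check like the Python sketch below will do. This is just an illustration, not part of the Bluestacks installer; the drive letter is an assumption, so adjust it to wherever you plan to install.

```python
import shutil

REQUIRED_GB = 4          # free space the note above says the installer needs
INSTALL_DRIVE = "C:\\"   # assumption: Bluestacks is being installed on the C: drive

total, used, free = shutil.disk_usage(INSTALL_DRIVE)
free_gb = free / (1024 ** 3)

if free_gb >= REQUIRED_GB:
    print(f"OK: {free_gb:.1f} GB free on {INSTALL_DRIVE}")
else:
    print(f"Not enough space: {free_gb:.1f} GB free, {REQUIRED_GB} GB required")
```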

          -

          Features of Bluestacks for Windows 7 1gb ram

          -

          Android emulation

          -

          Bluestacks emulates the Android operating system on your PC and lets you run any Android app or game on it. You can choose from different versions of Android such as Nougat, Oreo, Pie, etc. depending on your preference. You can also switch between portrait and landscape mode depending on the app or game you are using.

          -

          App compatibility

          -

          Bluestacks supports almost all the apps and games available on the Google Play Store. You can also install apps from other sources by using the APK files or by dragging and dropping them into Bluestacks. You can also update the apps regularly through the Google Play Store or manually.

          -

          Performance optimization

          -

          Bluestacks is designed to run smoothly on low-end PCs with only 1gb of ram. It uses advanced technologies such as Hyper-G Graphics, AI-driven resource management, Layercake, etc. to optimize the performance and reduce the lag and crashes. You can also adjust the settings such as CPU cores, RAM allocation, resolution, etc. to improve the speed and stability of Bluestacks.

          -

          Customization options

          -

          Bluestacks allows you to customize various aspects of its interface and functionality according to your liking. You can change the theme, language, keyboard layout, mouse sensitivity, etc. of Bluestacks. You can also create shortcuts, macros, scripts, etc. to automate tasks and enhance your productivity.

          -

          How to install and use Bluestacks on Windows 7 1gb ram

          -

          Installation steps

          -

          To install Bluestacks on Windows 7 1gb ram, you need to follow these steps:

          -
1. After downloading Bluestacks from its official website, open the installer file and click on "Install now".
2. Wait for the installation process to complete. It might take some time depending on your PC's specifications.
3. Once the installation is done, click on "Complete" and launch Bluestacks from your desktop or start menu.

          Launching Bluestacks

          -

          To launch Bluestacks on Windows 7 1gb ram, you need to follow these steps:

          -
1. Double-click on the Bluestacks icon on your desktop or start menu.
2. Wait for Bluestacks to load up. It might take some time depending on your PC's specifications.
3. You will see a welcome screen with some tips and tutorials. You can skip them or watch them if you want.
4. You will then see the main screen of Bluestacks with various tabs such as My Games, App Center, Game Center, etc.

          Setting up Google account

          -

          To set up your Google account on Bluestacks, you need to follow these steps:

          -
1. On the main screen of Bluestacks, click on the "Google Play" icon.
2. You will be asked to sign in with your Google account. If you already have one, enter your email address and password. If you don't have one, click on "Create account" and follow the instructions.
3. You will then be asked to agree to some terms and conditions. Read them carefully and click on "I agree".
4. You will then be asked to set up some preferences such as backup and restore, location services, etc. Choose them according to your needs.
5. You will then be taken to the Google Play Store where you can browse, download, update, and manage your apps.

          Downloading and running apps

          -

          To download and run apps on Bluestacks, you need to follow these steps:

          -
1. On the main screen of Bluestacks, click on the "Google Play" icon if you want to download apps from there. Alternatively, you can also click on the "App Center" icon if you want to download apps from other sources such as Amazon Appstore, Samsung Galaxy Store, etc.
2. Browse through the categories or search for the app you want by typing its name in the search bar.
3. Click on the app you want and then click on "Install". Wait for the app to download and install.
4. Once the app is installed, you will see its icon on the main screen of Bluestacks or in the "My Games" tab. Click on it to launch it.
5. You can now use the app as you would normally do on an Android device. You can use your mouse and keyboard or a gamepad or a controller if supported by the app.

          Conclusion

          -

          Summary of main points

          -

          In this article, we have shown you how to download Bluestacks for Windows 7 1gb ram and how to install and use it on your PC. We have also told you about the features and benefits of using Bluestacks on Windows 7. We hope that this article has helped you in enjoying your favorite Android apps and games on your PC without any hassle.

          -

Call to action

If you are ready to enjoy your favorite Android apps and games on a bigger screen, download Bluestacks for Windows 7 today, try it out, and share your experience with other users.

          FAQs

          -

          Here are some frequently asked questions about Bluestacks for Windows 7 1gb ram:

          -
1. Is Bluestacks safe to use on Windows 7?

   Yes, Bluestacks is safe to use on Windows 7. It does not contain any malware or viruses and does not harm your PC in any way. However, you should always download Bluestacks from its official website and not from any third-party sources.

2. Is Bluestacks free to use on Windows 7?

   Yes, Bluestacks is free to use on Windows 7. You can download, install, and use it without paying anything. However, there are some optional features and services that require a subscription or a purchase, such as premium support, cloud sync, etc.

3. Can I run multiple apps at the same time on Bluestacks?

   Yes, you can run multiple apps at the same time on Bluestacks. You can use the multi-instance feature to create and manage multiple instances of Bluestacks with different settings and preferences. You can also use the multi-window feature to view and interact with multiple apps on the same screen.

4. Can I transfer files between my PC and Bluestacks?

   Yes, you can transfer files between your PC and Bluestacks. You can use the media manager feature to import and export files such as photos, videos, music, etc. between your PC and Bluestacks. You can also use the shared folder feature to access files from your PC's local drive on Bluestacks.

5. Can I update Bluestacks to the latest version?

   Yes, you can update Bluestacks to the latest version. You can check for updates manually by clicking on the "Check for updates" button on the main screen of Bluestacks. You can also enable automatic updates by going to the settings menu and turning on the "Automatically download updates" option.

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Free PS3 Emulatorx v1.1.7 Bios File Download The Ultimate Guide.md b/spaces/tialenAdioni/chat-gpt-api/logs/Free PS3 Emulatorx v1.1.7 Bios File Download The Ultimate Guide.md deleted file mode 100644 index 81698b78b52a9f51ac94b6780e0de52250258bfd..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Free PS3 Emulatorx v1.1.7 Bios File Download The Ultimate Guide.md +++ /dev/null @@ -1,130 +0,0 @@ -
          -
          - Benefits of using PS3 Emulatorx v1.1.7 to play PS3 games on PC
          - Requirements and compatibility of PS3 Emulatorx v1.1.7 | | H2: How to download and install PS3 Emulatorx v1.1.7 bios file for free? | - Step-by-step guide on how to download PS3 Emulatorx v1.1.7 bios file from reliable sources
          - How to extract and install PS3 Emulatorx v1.1.7 bios file on your PC
          - How to configure PS3 Emulatorx v1.1.7 settings and options | | H2: How to use PS3 Emulatorx v1.1.7 to play PS3 games on PC? | - How to load and run PS3 games on PS3 Emulatorx v1.1.7
          - How to optimize PS3 Emulatorx v1.1.7 performance and graphics quality
          - How to troubleshoot common issues and errors with PS3 Emulatorx v1.1.7 | | H2: What are some of the best PS3 games that you can play with PS3 Emulatorx v1.1.7? | - A list of some of the most popular and acclaimed PS3 games that are compatible with PS3 Emulatorx v1.1.7
          - A brief overview of each game and its genre, gameplay, and features
          - A table comparing the original PS3 version and the emulated version of each game | | H2: Conclusion | - A summary of the main points and benefits of using PS3 Emulatorx v1.1.7 to play PS3 games on PC
          - A call to action for the readers to try out PS3 Emulatorx v1.1.7 and share their feedback
          - A disclaimer about the legality and ethics of using PS3 Emulatorx v1.1.7 | # Article with HTML formatting

          What is PS3 Emulatorx v1.1.7 and why do you need it?

          -

          If you are a fan of PlayStation 3 games and want to enjoy them on your PC, you might have heard of PS3 Emulatorx v1.1.7. This is a software that allows you to emulate the PlayStation 3 console on your computer and run its games with high compatibility and performance.

          -

          ps3 emulatorx v1.1.7 bios file free download


          Download File > https://urlcod.com/2uK9z8



          -

          PS3 Emulatorx v1.1.7 is one of the best PlayStation 3 emulators available for Windows, and it has many features that make it stand out from other emulators.

          -
- It supports a wide range of PS3 games, including some exclusive titles that are not available on other platforms.
- It has a user-friendly interface that lets you easily load and run your games, as well as customize your settings and options.
- It has a high emulation speed that ensures smooth gameplay and minimal lag or glitches.
- It has a high graphics quality that enhances the visuals and effects of your games, making them look better than on the original console.
- It has a sound quality that reproduces the audio and music of your games with clarity and accuracy.

          By using PS3 Emulatorx v1.1.7, you can enjoy many benefits that you cannot get from playing on a real PlayStation 3 console.

          -
- You can save money by not having to buy a PlayStation 3 console or its games, as well as accessories like controllers or cables.
- You can save space by not having to store a bulky console or discs, as well as avoid cluttering your room with wires or adapters.
- You can save time by not having to wait for long loading times or updates, as well as avoid compatibility issues or region locks.
- You can have more control over your gaming experience by adjusting your settings and options according to your preferences, as well as using keyboard and mouse or any controller of your choice.
- You can have more fun by playing your favorite PS3 games on a bigger screen with better resolution, as well as exploring new games that you might have missed or overlooked.

          To use PS3 Emulatorx v1.1.7, you will need some requirements and compatibility factors that ensure its proper functioning.

          -
- You will need a PC with Windows XP or higher operating system, at least 2 GB of RAM, at least 50 GB of free hard disk space, a DirectX 9 or higher compatible graphics card, and a sound card (a quick way to check your RAM and free disk space against these figures is sketched right after this list).
- You will need a BIOS file that contains the firmware of the PlayStation 3 console, which is essential for running the emulator and its games.
- You will need an ISO file that contains the image of the game disc that you want to play, which can be obtained from your own disc or downloaded from online sources.
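
If you are not sure whether your PC clears the RAM and disk figures above, the following Python sketch (Windows only, since it calls the Win32 API through ctypes) reads the installed memory and the free disk space and compares them against those numbers. The drive letter is an assumption made for the example.

```python
import ctypes
import shutil

# Minimum figures taken from the requirements list above.
MIN_RAM_GB = 2
MIN_FREE_DISK_GB = 50
DRIVE = "C:\\"   # assumption: the emulator and games will live on the C: drive

class MEMORYSTATUSEX(ctypes.Structure):
    """Mirrors the Win32 MEMORYSTATUSEX structure used by GlobalMemoryStatusEx."""
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
ram_gb = status.ullTotalPhys / (1024 ** 3)

free_gb = shutil.disk_usage(DRIVE).free / (1024 ** 3)

print(f"Installed RAM: {ram_gb:.1f} GB (need {MIN_RAM_GB} GB)")
print(f"Free space on {DRIVE}: {free_gb:.1f} GB (need {MIN_FREE_DISK_GB} GB)")
if ram_gb >= MIN_RAM_GB and free_gb >= MIN_FREE_DISK_GB:
    print("This PC meets the RAM and disk requirements listed above.")
else:
    print("This PC falls short of the listed requirements.")
```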

          How to download and install PS3 Emulatorx v1.1.7 bios file for free?

          -

          One of the most important steps for using PS3 Emulatorx v1.1.7 is to download and install its BIOS file, which is not included in the emulator package due to legal reasons.

          -

          The BIOS file is a small file that contains the code and data that are necessary for booting up the PlayStation 3 console and running its games.

          -


          -

          Without the BIOS file, the emulator will not work properly and will show an error message when you try to load a game.

          -

          Therefore, you need to obtain the BIOS file from a reliable source and install it on your PC before using the emulator.

          -

          Here is a step-by-step guide on how to download and install PS3 Emulatorx v1.1.7 bios file for free:

          -
1. Go to this website, which offers a free download link for the BIOS file.
2. Click on the "Download" button and wait for a few seconds until a new window opens.
3. Click on the "Download File" button again and wait for another few seconds until the download starts.
4. Save the file in a folder of your choice on your PC.
5. Extract the file using WinRAR or 7-Zip software.
6. You will get a folder named "PS3 Emulator Bios" that contains two files: "PSX Bios.bin" and "PSX Bios.scph1000.bin".
7. Copy both files and paste them in the same folder where you installed PS3 Emulatorx v1.1.7 (a quick way to double-check this is sketched after these steps).
8. You have successfully installed the BIOS file for PS3 Emulatorx v1.1.7.
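
As a quick sanity check for step 7, the following Python sketch confirms that both BIOS files ended up next to the emulator. The install folder path is a placeholder; point it at wherever you actually installed PS3 Emulatorx v1.1.7.

```python
from pathlib import Path

# Placeholder path -- change this to your actual PS3 Emulatorx v1.1.7 install folder.
emulator_dir = Path(r"C:\Games\PS3Emulatorx")

# File names taken from the extracted "PS3 Emulator Bios" folder described above.
bios_files = ["PSX Bios.bin", "PSX Bios.scph1000.bin"]

missing = [name for name in bios_files if not (emulator_dir / name).is_file()]

if missing:
    print("Missing BIOS file(s):", ", ".join(missing))
else:
    print("Both BIOS files are in place - the emulator should find them.")
```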

          How to use PS3 Emulatorx v1.1.7 to play PS3 games on PC?

          -

          After installing the BIOS file for PS3 Emulatorx v1.1.7, you are ready to use it to play PS3 games on your PC.

          -

          All you need is an ISO file that contains the image of the game disc that you want to play.

          -

          You can either create an ISO file from your own disc using software like ImgBurn or PowerISO, or download an ISO file from online sources like RomsMania, EmuParadise, or CoolROM.

          -

          Make sure that the ISO file is compatible with PS3 Emulatorx v1.1.7 and does not contain any viruses or malware.

          -

          Here is how to use PS3 Emulatorx v1.1.7 to play PS3 games on PC:

          -
1. Click on the "File" menu and select "Open" to browse for your ISO file.
2. Select your ISO file and click on "Open" to load it on the emulator.
3. A window will appear with some information about the game, such as its name, region, size, and format.
4. Click on the "Run" button to start playing the game.
5. You can use your keyboard and mouse or any controller of your choice to control the game.
6. You can also access the emulator's settings and options by clicking on the "Config" menu.
7. You can adjust various parameters such as graphics, sound, input, language, and more.
8. You can also save and load your game progress by using the emulator's save states feature.

        What are some of the best PS3 games that you can play with PS3 Emulatorx v1.1.7?

        -

        PS3 Emulatorx v1.1.7 supports a wide range of PS3 games, including some exclusive titles that are not available on other platforms.

        -

        Some of these games are considered to be among the best PS3 games ever made, and they offer a variety of genres, gameplay, and features.

        -

        Here are some of the best PS3 games that you can play with PS3 Emulatorx v1.1.7:

| Game | Genre | Gameplay | Features |
| --- | --- | --- | --- |
| God of War III | Action-adventure | Hack and slash | Epic story, brutal combat, stunning graphics |
| Uncharted 2: Among Thieves | Action-adventure | Third-person shooter | Cinematic story, thrilling action, online multiplayer |
| The Last of Us | Action-adventure | Survival horror | Emotional story, immersive gameplay, stealth mechanics |
| Metal Gear Solid 4: Guns of the Patriots | Action-adventure | Stealth | Complex story, tactical gameplay, online multiplayer |
| Grand Theft Auto V | Action-adventure | Open world | Massive map, diverse missions, online multiplayer |
| Red Dead Redemption | Action-adventure | Open world | Western setting, realistic gameplay, online multiplayer |
| Demon's Souls | Action role-playing | Hack and slash | Dark atmosphere, challenging gameplay, online multiplayer |
| Dark Souls | Action role-playing | Hack and slash | Dark atmosphere, challenging gameplay, online multiplayer |
| Persona 5 | Role-playing | Turn-based combat | Stylish graphics, engaging story, social simulation |
| Final Fantasy XIII | Role-playing | Turn-based combat | Sci-fi fantasy setting, linear gameplay, cinematic presentation |

The table below compares the original PS3 version and the emulated version of each game when run with PS3 Emulatorx v1.1.7.

| Game | Original PS3 Version | Emulated Version |
| --- | --- | --- |
| God of War III | 720p resolution, 30 fps frame rate, 8 GB disc size | 1080p resolution, 60 fps frame rate, 40 GB ISO size |
| Uncharted 2: Among Thieves | 720p resolution, 30 fps frame rate, 25 GB disc size | 1080p resolution, 60 fps frame rate, 20 GB ISO size |
| The Last of Us | 720p resolution, 30 fps frame rate, 27 GB disc size | 1080p resolution, 60 fps frame rate, 35 GB ISO size |
| Metal Gear Solid 4: Guns of the Patriots | 720p resolution, 30 fps frame rate, 50 GB disc size | 1080p resolution, 60 fps frame rate, 30 GB ISO size |
| Grand Theft Auto V | 720p resolution, 30 fps frame rate, 18 GB disc size | 1080p resolution, 60 fps frame rate, 17 GB ISO size |
| Red Dead Redemption | 720p resolution, 30 fps frame rate, 10 GB disc size | 1080p resolution, 60 fps frame rate, 8 GB ISO size |

        How to troubleshoot common issues and errors with PS3 Emulatorx v1.1.7?

        -

PS3 Emulatorx v1.1.7 is a capable piece of software that allows you to play PS3 games on your PC, but it is not perfect, and it may encounter some issues and errors from time to time.

        -

        Some of these issues and errors are related to the BIOS file, the ISO file, the emulator settings, or your PC specifications.

        -

        Here are some of the common issues and errors that you may face with PS3 Emulatorx v1.1.7 and how to fix them:

• BIOS file error: This error occurs when you try to run a game without installing the BIOS file, or when you install a wrong or corrupted BIOS file. To fix it, download and install the correct BIOS file for PS3 Emulatorx v1.1.7 from a reliable source, as explained in the previous section.
• ISO file error: This error occurs when you try to run a game with an incompatible or damaged ISO file. To fix it, check that your ISO file is compatible with PS3 Emulatorx v1.1.7 and does not contain any viruses or malware. You can also try creating or downloading a new ISO file from a different source.
• Emulator settings error: This error occurs when you try to run a game with inappropriate or incorrect emulator settings. To fix it, adjust your emulator settings according to your game's requirements and your PC's specifications. You can also reset the settings to default or use a preset configuration that suits your game.
• PC specifications error: This error occurs when your PC does not meet the minimum requirements for PS3 Emulatorx v1.1.7. To fix it, upgrade your PC's hardware components such as RAM, CPU, GPU, or HDD, or close any unnecessary programs or processes that may be consuming your PC's resources (a small sketch for spotting memory-hungry processes follows this list).
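If you suspect the last case, one quick way to see which running programs are eating memory before launching the emulator is the third-party psutil package. This is a generic sketch; psutil is an extra install and is not bundled with PS3 Emulatorx:

```python
import psutil  # pip install psutil

# List the five processes using the most resident memory.
procs = []
for p in psutil.process_iter(["name", "memory_info"]):
    mem = p.info.get("memory_info")
    if mem is not None:
        procs.append((mem.rss, p.info.get("name") or "?"))

for rss, name in sorted(procs, reverse=True)[:5]:
    print(f"{name:<30} {rss / (1 << 20):8.1f} MiB")
```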

        Conclusion

        -

PS3 Emulatorx v1.1.7 is a powerful and versatile piece of software that allows you to play PS3 games on your PC with high compatibility and performance.

        -

        By using PS3 Emulatorx v1.1.7, you can enjoy many benefits such as saving money, space, and time, as well as having more control and fun over your gaming experience.

        -

        To use PS3 Emulatorx v1.1.7, you need to download and install its BIOS file for free from a reliable source, as well as obtain an ISO file for each game that you want to play.

        -

        You also need to configure your emulator settings and options according to your game's requirements and your PC's specifications.

        -

        If you encounter any issues or errors with PS3 Emulatorx v1.1.7, you can troubleshoot them by following some simple steps and solutions.

        -

        If you are a fan of PS3 games and want to enjoy them on your PC, you should definitely try out PS3 Emulatorx v1.1.7 and see for yourself how amazing it is.

        -

        You can download PS3 Emulatorx v1.1.7 bios file for free from this website and start playing your favorite PS3 games on your PC today.

        -

        FAQs

        -
          -
        • Q: Is PS3 Emulatorx v1.1.7 legal?
          A: PS3 Emulatorx v1.1.7 is legal as long as you use it for personal and non-commercial purposes only. However, downloading or distributing copyrighted games without permission is illegal and may result in legal consequences.
        • -
        • Q: Is PS3 Emulatorx v1.1.7 safe?
          A: PS3 Emulatorx v1.1.7 is safe as long as you download it from a trusted source and scan it for viruses or malware before installing it on your PC.
        • -
        • Q: Is PS3 Emulatorx v1.1.7 free?
          A: PS3 Emulatorx v1.1.7 is free and does not require any registration or payment to use it.
        • -
        • Q: How can I update PS3 Emulatorx v1.1.7?
          A: You can update PS3 Emulatorx v1.1.7 by downloading the latest version from its official website or by checking for updates within the emulator itself.
• -
• Q: How can I contact the developers of PS3 Emulatorx v1.1.7?
  A: You can contact the developers of PS3 Emulatorx v1.1.7 by visiting their website or by sending them an email at ps3emulatorx@gmail.com. -
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Arceus X 2.0.11 APK The Ultimate Roblox Mod MenuExploit.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Arceus X 2.0.11 APK The Ultimate Roblox Mod MenuExploit.md deleted file mode 100644 index 282b312c4f2c177eecb6d3622d63da2a150b5605..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Arceus X 2.0.11 APK The Ultimate Roblox Mod MenuExploit.md +++ /dev/null @@ -1,106 +0,0 @@ - -

        Arceus X v 2.0.11 APK Download: A Roblox Mod Menu for Android

        -

        Are you a fan of Roblox and want to enhance your gaming experience with some cool features? Do you want to run scripts on your Android device and exploit your favorite games? If yes, then you should download Arceus X v 2.0.11 APK, a Roblox mod menu that allows you to do all that and more.

        -

        arceus x v 2.0.11 apk download


        Download Ziphttps://bltlly.com/2uOohg



        -

Arceus X is the first and one of the most widely used Roblox mod menus/exploits developed specifically for Android devices. It offers features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, a Script Hub, and more. Arceus X APK is developed by a team of professional developers who are constantly updating and improving it to make it the best Roblox mod menu for Android. It is free to download, easy to use, and works with most Roblox games. In this article, we will show you the features, installation steps, pros and cons, alternatives, and FAQs of Arceus X v 2.0.11 APK. Let's get started!

        -

        Features of Arceus X v 2.0.11 APK

        -

        Arceus X v 2.0.11 APK has many features that make it a powerful and versatile Roblox mod menu for Android. Here are some of the main features that you can enjoy with Arceus X APK:

        -
          -
        • Android LuaU Execution: This feature allows you to run scripts on your Android device without the need of a PC or an emulator. You can execute any script that is compatible with Android LuaU, which is the scripting engine used by Roblox on mobile devices. You can also use the built-in code editor to write your own scripts or modify existing ones.
        • -
        • Infinite Jump: This feature allows you to fly over obstacles and enemies in any game that has jumping mechanics. You can jump as high and as long as you want, and you can also control your direction and speed in the air. This feature is useful for escaping danger, reaching hidden areas, or exploring the map.
        • -
        • Super Speed: This feature allows you to move faster than ever in any game that has movement mechanics. You can increase or decrease your speed as you wish, and you can also toggle between walking and running modes. This feature is useful for saving time, dodging attacks, or outrunning opponents.
        • -
        • Btools: This feature allows you to modify the game environment in any game that has building mechanics. You can delete, clone, move, resize, rotate, or color any object in the game world, and you can also create your own objects from scratch. This feature is useful for creating custom maps, destroying obstacles, or trolling other players.
        • -
        • Script Hub: This feature allows you to access hundreds of scripts for different games that are made by other users or by the team of Arceus X. You can browse through various categories such as admin commands, game modes, hacks, cheats, etc., and you can also search for specific scripts by name or keyword. You can also save your favorite scripts for easy access later.
        • -
        • More!: This feature allows you to discover other amazing features that are not listed here, such as ESP (Extra Sensory Perception), Aimbot, God Mode, Noclip, Teleport, etc. You can also customize the user interface of Arceus X APK according to your preference, such as changing the theme color, font size, button layout, etc.
        • -
        -

        How to Download and Install Arceus X v 2.0.11 APK

        -

        Downloading and installing Arceus X v 2.0.11 APK is very simple and straightforward. Just follow these steps:

        -


        -
          -
        1. Step 1: Download the APK file from a trusted source. You can download it from the official website of Arceus X or from their YouTube channel. Make sure you download the latest version of Arceus X APK which is v 2.0.11 as of now.
        2. -
        3. Step 2: Enable unknown sources on your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.
        4. -
        5. Step 3: Install the APK file and launch the app. To do this, locate the downloaded APK file on your device storage and tap on it. Follow the instructions on the screen to complete the installation process. Once done, open the app and grant it the necessary permissions.
        6. -
        7. Step 4: Get the verification key and start exploiting Roblox. To do this, tap on the "Get Key" button on the app and follow the instructions on how to get the verification key from their website. The verification key is required to use Arceus X APK and it changes every day for security reasons. Once you have the verification key, enter it on the app and tap on "Verify". Now you can start exploiting Roblox with Arceus X APK!
        8. -
        -

        Pros and Cons of Arceus X v 2.0.11 APK

        -

        Arceus X v 2. 0.11 APK has its own advantages and disadvantages that you should be aware of before using it. Here are some of the pros and cons of Arceus X APK:

        -
          -
        • Pros: -
            -
          • It is free to download and use, unlike some other Roblox mod menus that require payment or subscription.
          • -
          • It is easy to use, with a simple and intuitive user interface that guides you through the process of exploiting Roblox.
          • -
          • It is powerful, with a wide range of features that can help you dominate any game you want.
          • -
          • It is updated, with new features and bug fixes being added regularly by the team of Arceus X.
          • -
          • It is safe, with no viruses or malware that can harm your device or your Roblox account.
          • -
          -
        • -
        • Cons: -
            -
          • It requires verification, which means you have to get a new key every day from their website to use Arceus X APK.
          • -
          • It may crash or lag, depending on your device performance and internet connection.
          • -
          • It may get banned by Roblox, if you abuse the features or get reported by other players. Roblox has a strict anti-exploit policy that can detect and ban users who use mod menus or exploits.
          • -
          -
        • -
        -

        Alternatives to Arceus X v 2.0.11 APK

        -

        If you are looking for some alternatives to Arceus X v 2.0.11 APK, here are some of the best ones that you can try:

        -
          -
        • Hydrogen: This is a new Roblox mobile executor that has more features than Arceus X APK, such as Auto Update, Auto Execute, Custom Keybinds, Custom Themes, etc. It also has a script hub with over 1000 scripts for different games. Hydrogen is compatible with Android 7.0 and above, and it is free to download and use.
        • -
        • JJsploit: This is a popular Roblox executor for Windows PC that has been around for a long time. It has many features such as Lua C Execution, Click Teleport, Speed Hack, Fly Hack, Noclip, etc. It also has a script hub with over 500 scripts for different games. JJsploit is compatible with Windows 7 and above, and it is free to download and use.
        • -
        • KRNL: This is a free and stable Roblox executor for Windows PC that has been praised by many users. It has many features such as Lua Execution, Drawing Library, Debug Library, HTTP Library, etc. It also has a script hub with over 1000 scripts for different games. KRNL is compatible with Windows 7 and above, and it is free to download and use.
        • -
        -

        Conclusion

        -

In conclusion, Arceus X v 2.0.11 APK is a Roblox mod menu for Android that allows you to run scripts on your device and exploit your favorite games. It has many features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, a Script Hub, and more. It is free to download and easy to use, but it also has some drawbacks, such as requiring daily verification, occasional crashes or lag, and the risk of getting banned by Roblox. If you are looking for alternatives to Arceus X APK, you can try Hydrogen, JJsploit, or KRNL. We hope this article was helpful and answered your questions about Arceus X v 2.0.11 APK. Happy exploiting!

        -

        FAQs

        -

        Here are some of the frequently asked questions about Arceus X v 2.0.11 APK:

        -
          -
        • Q1: Is Arceus X v 2.0.11 APK safe to use?
        • -
        • A1: Yes, it is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, there is always a risk of getting banned by Roblox if you abuse the features or get reported by other players. Use it at your own discretion and responsibility.
        • -
        • Q2: How do I update Arceus X v 2.0.11 APK?
        • -
        • A2: You can check for updates on the official website of Arceus X or on their YouTube channel. You can also join their Discord server to get notified of new updates and support.
        • -
        • Q3: What are some of the best scripts to use with Arceus X v 2.0.11 APK?
        • -
        • A3: There are many scripts to choose from depending on your preference and the game you want to exploit. Some of the most popular scripts are Adopt Me Script, Anime Adventures Script, Bedwars Script, Blox Fruit Script, Brookhaven RP Script, Doors Script, Livetopia Script, My restaurant Script, Pet Simulator X Script, etc. You can find them on the script hub of Arceus X or on other websites like Pastebin or V3rmillion.
        • -
        • Q4: Can I use Arceus X v 2.0.11 APK on iOS or Mac?
        • -
        • A4: No, Arceus X v 2.0.11 APK is only compatible with Android devices. However, the team of Arceus X are working on developing an iOS and Mac version of Arceus X soon. Stay tuned for more updates on their website or social media platforms.
        • -
        • Q5: Can I request a feature or report a bug for Arceus X v 2.0.11 APK?
        • -
        • A5: Yes, you can request a feature or report a bug for Arceus X v 2.0.11 APK by contacting them through their Discord server or email address. They are always open to feedback and suggestions from their users.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bhop Expert vs Bhop Pro Which One is Better for PC Gamers?.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bhop Expert vs Bhop Pro Which One is Better for PC Gamers?.md deleted file mode 100644 index ac9473f9d03b175fda05e401df2b9cf303f0164c..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bhop Expert vs Bhop Pro Which One is Better for PC Gamers?.md +++ /dev/null @@ -1,210 +0,0 @@ -
        -

        Bhop Pro PC No Download: How to Play the Best Bunny Hop Game on Your Computer

        -

        If you are a fan of first-person shooter (FPS) games or speedrunning, you might have heard of bhop pro, a mobile game that lets you jump and bunny hop in various maps and modes. But did you know that you can also play bhop pro on your PC without downloading anything? In this article, we will show you how to do that, as well as some tips and tricks to master the game. We will also share some reviews and ratings from other players who have tried bhop pro on PC.

        -

        What is Bhop Pro?

        -

        A brief introduction to the game and its features

        -

        Bhop pro is a simulation game developed by begma that allows you to practice your bunny hopping skills in different environments. Bunny hopping is a technique that involves jumping and strafing in the air to gain more speed and momentum. It is often used in FPS games like Counter-Strike or Half-Life to move faster and dodge enemies.

        -
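For readers curious about why jumping and strafing adds speed at all, the sketch below shows the air-acceleration rule used by Quake/Source-style movement code, which is the behaviour bunny hopping exploits: only the velocity component along your "wish" direction is capped, so turning while strafing keeps adding speed every frame. The constants and loop here are illustrative assumptions, not values extracted from bhop pro:

```python
import math

def air_accelerate(velocity, wish_dir, wish_speed, accel, dt, air_cap=30.0):
    """Source-style air acceleration: only the velocity component along
    wish_dir is capped, so strafing while turning keeps adding speed."""
    wish_speed = min(wish_speed, air_cap)            # the classic 30 u/s air cap
    current = velocity[0] * wish_dir[0] + velocity[1] * wish_dir[1]
    add_speed = wish_speed - current
    if add_speed <= 0:
        return velocity
    accel_speed = min(accel * wish_speed * dt, add_speed)
    return (velocity[0] + accel_speed * wish_dir[0],
            velocity[1] + accel_speed * wish_dir[1])

# Toy loop: always "wish" roughly perpendicular to the current velocity,
# which is what turning the mouse while strafing does in practice.
vel = (250.0, 0.0)
for _ in range(300):                                 # five seconds at 60 fps
    speed = math.hypot(*vel)
    perp = (-vel[1] / speed, vel[0] / speed)         # 90 degrees to the left
    vel = air_accelerate(vel, perp, wish_speed=250.0, accel=10.0, dt=1 / 60)
print(round(math.hypot(*vel), 1))                    # noticeably above 250
```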

        bhop pro pc no download


        Download Filehttps://bltlly.com/2uOqrO



        -

        Bhop pro has several features that make it an enjoyable and challenging game for bhop enthusiasts. Some of these features are:

        -
          -
        • Simple and accessible touch controls
        • -
        • Dynamic movements with realistic in-game physics
        • -
        • Multiple game modes to try out, such as parkour, surf, deathrun, speedrun, and random
        • -
        • Compete and increase your ranks by completing quests and earning medals
        • -
        • Various maps with interesting setups and obstacles
        • -
        • Feel free to customize your character with different outfits and accessories
        • -
        • Awesome boost cases and unlockable items, such as knives, gloves, spinners, skins, etc.
        • -
        • Have fun sharing your awesome in-game moments with screenshots
        • -
        -

        Why is it popular among FPS fans and speedrunners?

        -

        Bhop pro is popular among FPS fans and speedrunners because it offers a fun and challenging way to improve their bhop skills. Bhop pro can help them learn how to control their movement and speed in different situations, as well as how to adapt to different maps and modes. Bhop pro can also help them compete with other players from around the world and see how they rank among the best bhop players. Bhop pro is also a great game for those who just want to have fun and relax with a casual and addictive game.

        -

        How to Play Bhop Pro on PC without Downloading Anything

        -

        Option 1: Play online at Y8.com

        -

        The pros and cons of this option

        -

        One of the easiest ways to play bhop pro on PC without downloading anything is to play it online at Y8.com, a website that hosts thousands of free online games. This option has some advantages and disadvantages, such as:

| Pros | Cons |
| --- | --- |
| No need to download or install anything | Requires an internet connection and a browser that supports WebGL |
| Fast and simple to access and play | Limited features and options compared to the mobile version |
| Free to play without any ads or in-app purchases | No account or progress saving, no customization or items, no quests or ranks |
        -

        How to access and play the game

        -

        To access and play bhop pro online at Y8.com, you just need to follow these steps:

        -
          -
        1. Go to https://www.y8.com/games/bhop_pro_online
        2. -
        3. Click on the "Play" button to start the game
        4. -
        5. Select your preferred map and mode from the menu
        6. -
        7. Use the arrow keys or WASD keys to move, spacebar to jump, and mouse to look around
        8. -
        9. Enjoy the game and try to beat your own records or challenge other players online
        10. -
        -

        Option 2: Use an emulator like BlueStacks

        -

        The pros and cons of this option

        -

        Another way to play bhop pro on PC without downloading anything is to use an emulator like BlueStacks, a software that allows you to run Android apps on your PC. This option also has some advantages and disadvantages, such as:

        -


| Pros | Cons |
| --- | --- |
| No need to download the game itself, only the emulator | Requires a PC that meets the minimum system requirements for the emulator |
| Full access to all the features and options of the mobile version | Might experience some lag or performance issues depending on your PC specs and internet speed |
| Ability to create an account and save your progress, customize your character and items, complete quests and earn ranks, etc. | Might encounter some compatibility or security issues depending on the emulator and the game version |
        -

        How to install and use the emulator

        -

        To install and use BlueStacks to play bhop pro on PC, you just need to follow these steps:

        -
          -
        1. Go to https://www.bluestacks.com/
        2. -
        3. Click on the "Download BlueStacks" button and follow the instructions to install the emulator on your PC
        4. Launch BlueStacks and sign in with your Google account
        5. Search for bhop pro in the Google Play Store app and install it
        6. Open bhop pro from the app drawer and enjoy the game
        7. You can also customize your keyboard and mouse settings for better control and comfort
        -

        Tips and Tricks to Master Bhop Pro on PC

        -

        How to control your movement and speed

        -

        The key to mastering bhop pro is to learn how to control your movement and speed in the air. Here are some tips and tricks that can help you with that:

        -
          -
        • Avoid holding down the jump button. Instead, tap it right before you land on the ground. This will help you maintain your momentum and avoid losing speed.
        • -
        • Strafe in the air by pressing either left or right while moving your mouse in the same direction. This will help you gain more speed and change direction.
        • -
        • Aim for smooth surfaces and avoid hitting walls or obstacles. This will help you avoid losing speed or bouncing off unpredictably.
        • -
        • Use ramps, slopes, or edges to launch yourself higher or farther. This will help you reach new areas or shortcuts.
        • -
        • Experiment with different combinations of jumps, strafes, and turns. This will help you find the best routes and strategies for each map and mode.
        • -
        -

        How to choose the best maps and modes

        -

        Bhop pro has a variety of maps and modes that you can choose from, depending on your preference and skill level. Here are some tips and tricks that can help you choose the best ones for you:

        -
          -
        • Start with the easy maps and modes, such as parkour or random. These will help you get familiar with the game mechanics and controls, as well as the basic bhop techniques.
        • -
        • Progress to the harder maps and modes, such as surf or deathrun. These will challenge you with more complex and difficult setups and obstacles, as well as more competitive and skilled opponents.
        • -
        • Try out the different themes and styles of the maps, such as city, desert, forest, space, etc. These will give you a different visual and auditory experience, as well as a different gameplay feel.
        • -
        • Explore the hidden secrets and easter eggs of the maps, such as secret passages, hidden items, or references to other games or media. These will add more fun and excitement to your game, as well as reward you with some surprises.
        • -
        • Vote for your favorite maps and modes, or suggest new ones to the developers. This will help improve the game quality and variety, as well as show your support and feedback to the game creators.
        • -
        -

        How to customize your character and items

        -

        Bhop pro also allows you to customize your character and items with different outfits and accessories. Here are some tips and tricks that can help you with that:

        -
          -
        • Earn coins by playing the game, completing quests, or watching ads. You can use these coins to buy boost cases or items from the shop.
        • -
        • Open boost cases to get random items, such as knives, gloves, spinners, skins, etc. You can also sell or trade your items with other players.
        • -
        • Equip your items from the inventory menu. You can also preview how they look on your character before equipping them.
        • -
        • Express your personality and style with your items. You can mix and match different items to create your own unique look.
        • -
        • Show off your items to other players in the game. You can also admire or compliment other players' items.
        • -
        -

        Bhop Pro Reviews and Ratings

        -

        What do other players think about the game?

        -

        Bhop pro has received mostly positive reviews and ratings from other players who have tried it on PC. Here are some of their comments:

        -
        "This game is awesome! I love how it simulates bhop in a realistic way. It's very addictive and fun to play."
        -
        "I've been playing bhop pro for a while now and I can say that it's one of the best bhop games out there. It has a lot of features and options that make it enjoyable and challenging."
        -
        "Bhop pro is a great game for bhop lovers. It has a lot of maps and modes to choose from, as well as a lot of customization and items. It's also very easy to play on PC without downloading anything."
        -

        How does it compare to other bhop games?

        -

        Bhop pro is not the only bhop game available on PC, but it is one of the best ones in terms of quality and variety. Here are some of the ways it compares to other bhop games:

| Bhop Pro | Other Bhop Games |
| --- | --- |
| No download required (online or emulator) | Download required (Steam or other platforms) |
| Free to play without ads or in-app purchases | Some are free, some are paid, some have ads or in-app purchases |
| Full access to all features and options of the mobile version | Limited or different features and options depending on the game version |
| Multiple game modes to try out (parkour, surf, deathrun, speedrun, random) | Single or few game modes available (usually parkour or surf) |
| Various maps with different themes and styles (city, desert, forest, space, etc.) | Fewer maps with similar themes and styles (usually urban or industrial) |
| A lot of customization and items (knives, gloves, spinners, skins, etc.) | Limited or no customization or items available |
| Compete and increase ranks by completing quests and earning medals | No competition or ranking system available |
        -

        Conclusion

        -

        A summary of the main points and a call to action

        -

        Bhop pro is a simulation game that lets you practice your bunny hopping skills in different maps and modes. It is popular among FPS fans and speedrunners who want to improve their bhop skills and compete with other players. You can play bhop pro on PC without downloading anything by using either an online website or an emulator. You can also customize your character and items, as well as enjoy the various features and options of the game. Bhop pro has received mostly positive reviews and ratings from other players who have tried it on PC, and it is one of the best bhop games available in terms of quality and variety.

        -

        If you are interested in playing bhop pro on PC, you can try it out for yourself by following the instructions in this article. You can also check out the official website or social media pages of the game for more information and updates. Bhop pro is a fun and challenging game that will test your bhop skills and keep you entertained for hours. So what are you waiting for? Start bunny hopping today!

        -

        FAQs

        -

        Q1: What is bunny hop?

        -

        A1: Bunny hop is a technique that involves jumping and strafing in the air to gain more speed and momentum. It is often used in FPS games like Counter-Strike or Half-Life to move faster and dodge enemies.

        -

        Q2: Is Bhop Pro free to play?

        -

        A2: Yes, bhop pro is free to play without any ads or in-app purchases. You can play it online or use an emulator to play it on PC without downloading anything.

        -

        Q3: Can I play Bhop Pro with friends?

        -

        A3: Yes, you can play bhop pro with friends online by joining the same server or creating your own private room. You can also chat with other players in the game.

        -

        Q4: What are the system requirements for Bhop Pro on PC?

        -

        A4: The system requirements for bhop pro on PC depend on the option you choose to play it. If you play it online, you need an internet connection and a browser that supports WebGL. If you use an emulator, you need a PC that meets the minimum system requirements for the emulator, such as BlueStacks.

        -

        Q5: Where can I find more information about Bhop Pro?

        -

        A5: You can find more information about bhop pro by visiting the official website or social media pages of the game, such as:

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ashampoo Magical Security 2.02 (Portable).torrent.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ashampoo Magical Security 2.02 (Portable).torrent.md deleted file mode 100644 index a10c898eb4d97c4c70ae2e59bce3a7cb117b2d65..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ashampoo Magical Security 2.02 (Portable).torrent.md +++ /dev/null @@ -1,78 +0,0 @@ - - - - - -
        -

        Ashampoo Magical Security 2.02 (Portable): A Review

        -

        Introduction

        -

If you are looking for a simple and effective way to protect your private files and folders from unauthorized access, theft, or loss, you might want to consider using Ashampoo Magical Security 2.02 (Portable). This is a software tool that allows you to encrypt and decrypt your data using a strong algorithm and store it on a USB drive or a CD/DVD.

        -

        Encryption is a process of transforming readable data into unreadable data using a secret key. Only those who have the correct key can decrypt the data and access its original content. Encryption can help you secure your sensitive information from hackers, spies, or anyone who might want to harm you or misuse your data.

        -
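To make the idea concrete, here is a minimal sketch of AES-256 file encryption in Python using the widely available `cryptography` package. It is not Ashampoo's implementation; the file names and the choice of AES-GCM are assumptions for illustration only:

```python
import os
from pathlib import Path
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_file(src: str, dst: str, key: bytes) -> None:
    """Encrypt src with AES-256-GCM and write nonce + ciphertext to dst."""
    nonce = os.urandom(12)                      # must be unique for every encryption
    ciphertext = AESGCM(key).encrypt(nonce, Path(src).read_bytes(), None)
    Path(dst).write_bytes(nonce + ciphertext)

def decrypt_file(src: str, dst: str, key: bytes) -> None:
    """Reverse of encrypt_file; raises InvalidTag if the key or data is wrong."""
    blob = Path(src).read_bytes()
    Path(dst).write_bytes(AESGCM(key).decrypt(blob[:12], blob[12:], None))

# Example usage with hypothetical file names; keep the key secret and backed up.
key = AESGCM.generate_key(bit_length=256)
encrypt_file("secret.docx", "secret.docx.enc", key)
decrypt_file("secret.docx.enc", "secret_restored.docx", key)
```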

        Ashampoo Magical Security 2.02 (Portable).torrent


        Download Zip 🆗 https://urlcod.com/2uHxsO



        -

Ashampoo Magical Security 2.02 (Portable) is a tool that offers several benefits for encrypting your data:

        -
          -
        • It is portable, which means you can run it directly from a USB drive or a CD/DVD without installing it on your PC.
        • -
        • It is easy to use, which means you can encrypt and decrypt your files and folders with just a few clicks.
        • -
        • It is fast and reliable, which means you can encrypt and decrypt large amounts of data in a short time without compromising its quality or integrity.
        • -
        • It is compatible with Windows XP, Vista, 7, 8, and 10.
        • -
        -

        Ashampoo Magical Security 2.02 (Portable) has several features that make it stand out from other encryption software:

        -
          -
        • It supports multiple encryption modes, such as AES 128, AES 192, and AES 256, which are considered very secure and widely used by governments and organizations.
        • -
        • It allows you to create self-extracting EXE archives that can be opened with a password on any PC, without requiring the software to be installed.
        • -
        • It enables you to burn encrypted files to CD or DVD, which can be accessed with a password on any PC that has a CD/DVD drive.
        • -
• It lets you delete files securely, using various methods of overwriting the data to prevent any recovery (a minimal sketch of this overwrite-then-delete idea follows the list below).
        • -
        -
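To illustrate the secure-deletion bullet above, here is a small sketch of the general overwrite-then-remove pattern. It is not Ashampoo's Gutmann or DoD implementation, and on SSDs or copy-on-write filesystems overwriting in place is not a reliable guarantee that old data is gone:

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes several times, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                block = min(remaining, 1 << 20)   # write in 1 MiB chunks
                f.write(os.urandom(block))
                remaining -= block
            f.flush()
            os.fsync(f.fileno())                  # push each pass to disk
    os.remove(path)

overwrite_and_delete("old_notes.txt")             # hypothetical file name
```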

        As you can see, Ashampoo Magical Security 2.02 (Portable) is a powerful and convenient tool for encrypting your data and keeping it safe. But how do you download and use it? Let's find out in the next section.

        -

        How to download and use Ashampoo Magical Security 2.02 (Portable)?

        -

        To download and use Ashampoo Magical Security 2.02 (Portable), you need to follow these steps:

        -
          -
        1. Find and download the torrent file for Ashampoo Magical Security 2.02 (Portable). You can use any torrent client, such as uTorrent, BitTorrent, or qBittorrent, to download the torrent file from various sources on the internet. Make sure you download the file from a trusted and verified source, and scan it for viruses or malware before opening it.
        2. -
        3. Open and run the portable software from the torrent file. Once you have downloaded the torrent file, you can open it with your torrent client and extract the portable software to a USB drive or a CD/DVD. Alternatively, you can mount the ISO image of the software using a virtual drive software, such as Daemon Tools or PowerISO, and run it from there.
        4. -
        5. Encrypt and decrypt files and folders using the portable software. To encrypt your files and folders, you need to launch the portable software and select the "Encrypt" option from the main menu. Then, you need to choose the files and folders you want to encrypt, select the encryption mode and password, and click on "Start". The software will encrypt your data and save it in a new folder with the extension ".ams". To decrypt your files and folders, you need to select the "Decrypt" option from the main menu, choose the encrypted folder, enter the password, and click on "Start". The software will decrypt your data and restore it to its original state.
        6. -
        7. Burn encrypted files to CD or DVD using the portable software. To burn your encrypted files to CD or DVD, you need to select the "Burn" option from the main menu. Then, you need to choose the encrypted folder, select the CD/DVD drive, enter the password, and click on "Start". The software will burn your encrypted data to a disc that can be accessed with a password on any PC that has a CD/DVD drive.
        8. -
        9. Delete files securely using the portable software. To delete your files securely, you need to select the "Delete" option from the main menu. Then, you need to choose the files or folders you want to delete, select the deletion method (such as Gutmann or DoD), and click on "Start". The software will overwrite your data multiple times with random patterns to prevent any recovery.
        10. -
        -

        As you can see, using Ashampoo Magical Security 2.02 (Portable) is very easy and straightforward. But what are the pros and cons of using this software? Let's find out in the next section.

        -
        - : https://www.ashampoo.com/en/usd/pin/0056/security-software/Ashampoo-Magical-Security-2 : https://www.softpedia.com/get/Security/Encrypting/Ashampoo-Magical-Security.shtml : https://www.lifewire.com/how-to-download-torrents-2487180 : https://www.wikihow.com/Run-Software-Directly-Off-a-USB-Flash-Drive : https://www.manualslib.com/manual/233638/Ashampoo-Magical-Security-2.html

        Pros and cons of Ashampoo Magical Security 2.02 (Portable)

        -

        Ashampoo Magical Security 2.02 (Portable) is a software that has many advantages and disadvantages for encrypting your data. Here are some of them:

        -

        Pros

        -
          -
        • It is portable, which means you can carry it with you on a USB drive or a CD/DVD and use it on any PC without installing it.
        • -
        • It is easy to use, which means you can encrypt and decrypt your files and folders with just a few clicks and a password.
        • -
        • It is fast and reliable, which means you can encrypt and decrypt large amounts of data in a short time without compromising its quality or integrity.
        • -
        • It supports multiple encryption modes, such as AES 128, AES 192, and AES 256, which are considered very secure and widely used by governments and organizations.
        • -
        • It allows you to create self-extracting EXE archives that can be opened with a password on any PC, without requiring the software to be installed.
        • -
        • It enables you to burn encrypted files to CD or DVD, which can be accessed with a password on any PC that has a CD/DVD drive.
        • -
        • It lets you delete files securely, using various methods of overwriting the data to prevent any recovery.
        • -
        -

        Cons

        -
          -
        • It is not free, which means you have to pay for the full version of the software or use a free trial version with limited features.
        • -
        • It is not compatible with Mac, Linux, or mobile devices, which means you cannot use it on other platforms or devices.
        • -
        • It may not be legal or ethical to use it for some purposes, such as pirating, hacking, or spying, which means you have to be careful and responsible when using it.
        • -
        • It may not be enough to protect your data from some threats, such as physical damage, loss, or theft of your USB drive or CD/DVD, which means you have to take extra precautions and backups when storing your data.
        • -
        -

        As you can see, Ashampoo Magical Security 2.02 (Portable) has its pros and cons for encrypting your data. You have to weigh them carefully and decide whether this software is suitable for your needs and preferences. In the next section, we will conclude the article and give you our recommendation.

        -

        -

        Conclusion

        -

        In this article, we have reviewed Ashampoo Magical Security 2.02 (Portable), a software that allows you to encrypt and decrypt your data using a strong algorithm and store it on a USB drive or a CD/DVD. We have discussed what this software is, what are the benefits of using it, what are the main features of it, how to download and use it, and what are the pros and cons of using it.

        -

We think that Ashampoo Magical Security 2.02 (Portable) is a good tool for encrypting your data and keeping it safe. It is portable, easy to use, fast and reliable, and supports multiple AES key lengths. It also allows you to create self-extracting EXE archives, burn encrypted files to CD or DVD, and delete files securely. However, it is not free, it is not compatible with other platforms or devices, it may not be legal or ethical to use for some purposes, and it is not enough on its own to protect your data from every threat. Therefore, you have to weigh these factors before using this software.

        -

        If you are interested in trying Ashampoo Magical Security 2.02 (Portable), you can download a free trial version from the official website or from Softpedia. You can also find the torrent file for this software from various sources on the internet, but make sure you download it from a trusted and verified source. You can also read the user manual for this software from ManualsLib for more information and guidance.

        -

        We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

        -

        FAQs

        -
          -
        • Q: Is Ashampoo Magical Security 2.02 (Portable) free or paid?
        • -
        • A: It is a paid software, but you can download a free trial version from the official website or from Softpedia. The free trial version has limited features and expires after 10 days.
        • -
        • Q: Is Ashampoo Magical Security 2.02 (Portable) safe and legal to use?
        • -
        • A: It is safe and legal to use as long as you use it for legitimate purposes and do not violate any laws or copyrights. However, you have to be careful and responsible when using it, as some purposes, such as pirating, hacking, or spying, may be illegal or unethical in your jurisdiction.
        • -
        • Q: How strong is the encryption algorithm used by Ashampoo Magical Security 2.02 (Portable)?
        • -
• A: It uses the US-government-approved AES algorithm, which is considered very strong and secure. AES stands for Advanced Encryption Standard, and the software offers three key lengths: AES 128, AES 192, and AES 256. The longer the key, the stronger the encryption, and even 128-bit AES is considered secure for most purposes. Ashampoo Magical Security 2.02 (Portable) supports all three, and you can choose the one that suits your needs.
        • -
        • Q: Can I share encrypted files with others using Ashampoo Magical Security 2.02 (Portable)?
        • -
        • A: Yes, you can share encrypted files with others by creating self-extracting EXE archives that can be opened with a password. A self-extracting EXE archive is a file that contains both the encrypted data and the software to decrypt it. You can create such a file by selecting the "Create EXE" option from the main menu of the software. Then, you can send or transfer the file to anyone you want, and they can open it with the password you set.
        • -
        • Q: Can I use Ashampoo Magical Security 2.02 (Portable) on any computer or device?
        • -
        • A: You can use it on any Windows PC that supports USB drives or CD/DVD drives. You cannot use it on Mac, Linux, or mobile devices, as they have different operating systems and file systems. However, you can still access your encrypted data on these devices by using a compatible software that can read and decrypt AMS files.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Crack Keygen 2021BIM 360 Glue IPad App 2008 Keygen 2021.md b/spaces/tioseFevbu/cartoon-converter/scripts/Crack Keygen 2021BIM 360 Glue IPad App 2008 Keygen 2021.md deleted file mode 100644 index 45abf301c93a14ee364ec754cf21e0f0007938c3..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Crack Keygen 2021BIM 360 Glue IPad App 2008 Keygen 2021.md +++ /dev/null @@ -1,28 +0,0 @@ -
        -

        How to Use Crack KeygenBIM 360 Glue IPad App 2008 Keygen

        -

        If you are looking for a way to access and collaborate on BIM projects from your iPad, you might be interested in Crack KeygenBIM 360 Glue iPad App 2008 Keygen. This is a software tool that allows you to generate valid activation codes for the BIM 360 Glue iPad app, which is a cloud-based BIM management and collaboration solution.

        -

        Crack KeygenBIM 360 Glue IPad App 2008 Keygen


        Download File > https://urlcod.com/2uHvXI



        -

        With Crack KeygenBIM 360 Glue iPad App 2008 Keygen, you can enjoy the benefits of BIM 360 Glue on your iPad, such as viewing and navigating 3D models, accessing project data and documents, creating markups and issues, and collaborating with team members in real time. You can also sync your data with the cloud and access it from any device.

        -

        To use Crack KeygenBIM 360 Glue iPad App 2008 Keygen, you need to follow these steps:

        -
          -
        1. Download the Crack KeygenBIM 360 Glue iPad App 2008 Keygen from the link provided below.
        2. -
        3. Extract the zip file and run the keygen.exe file.
        4. -
        5. Select the BIM 360 Glue iPad App from the list of products and click on Generate.
        6. -
        7. Copy the activation code and paste it into the BIM 360 Glue iPad App when prompted.
        8. -
        9. Enjoy your full version of BIM 360 Glue iPad App.
        10. -
        -

        Note: This keygen is for educational purposes only. We do not condone piracy or illegal use of software. Please buy the original product if you like it.

        -

        - -

        One of the main features of BIM 360 Glue iPad App is the ability to view and navigate 3D models from your iPad. You can zoom, pan, rotate, and orbit the model using intuitive gestures. You can also switch between different views, such as perspective, orthographic, section, and walk-through. You can also use the app to measure distances, angles, and areas in the model.

        -

        Another feature of BIM 360 Glue iPad App is the ability to access project data and documents from your iPad. You can browse and open files from your BIM 360 Glue project, such as drawings, specifications, schedules, and reports. You can also view 2D sheets and documents in PDF format. You can also search for files by name, type, or date.

        -

        A third feature of BIM 360 Glue iPad App is the ability to create markups and issues from your iPad. You can use the app to annotate the model or document with text, shapes, symbols, stamps, and photos. You can also assign issues to team members and track their status and resolution. You can also share your markups and issues with other team members via email or BIM 360 Glue.

        - -

        Besides the features mentioned above, BIM 360 Glue iPad App also offers some benefits for BIM users and project stakeholders. Some of these benefits are:

        -
          -
        • Improved productivity and efficiency: By using BIM 360 Glue iPad App, you can access and review BIM models anytime, anywhere, without the need for a desktop computer or a network connection. You can also save time and bandwidth by syncing only the changes made to the models, instead of downloading the entire models every time.
        • -
        • Enhanced communication and collaboration: By using BIM 360 Glue iPad App, you can communicate and collaborate with your project team more effectively. You can share your feedback and insights with others by creating and sending markups and issues. You can also receive notifications when someone else makes a change or adds a comment to the model. You can also view the history of changes and comments made by others.
        • -
        • Better decision making and problem solving: By using BIM 360 Glue iPad App, you can make better decisions and solve problems faster. You can access intelligent property data and documents related to the model. You can also measure and compare different aspects of the model, such as distances, angles, areas, volumes, etc. You can also identify and resolve clashes and conflicts before they become costly issues.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/wrappers.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/wrappers.py deleted file mode 100644 index e031ed7087556da5c760d01196046712e4392752..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/wrappers.py +++ /dev/null @@ -1,375 +0,0 @@ -import threading -from contextlib import contextmanager -import os -from os.path import abspath, join as pjoin -import shutil -from subprocess import check_call, check_output, STDOUT -import sys -from tempfile import mkdtemp - -from . import compat -from .in_process import _in_proc_script_path - -__all__ = [ - 'BackendUnavailable', - 'BackendInvalid', - 'HookMissing', - 'UnsupportedOperation', - 'default_subprocess_runner', - 'quiet_subprocess_runner', - 'Pep517HookCaller', -] - - -@contextmanager -def tempdir(): - td = mkdtemp() - try: - yield td - finally: - shutil.rmtree(td) - - -class BackendUnavailable(Exception): - """Will be raised if the backend cannot be imported in the hook process.""" - def __init__(self, traceback): - self.traceback = traceback - - -class BackendInvalid(Exception): - """Will be raised if the backend is invalid.""" - def __init__(self, backend_name, backend_path, message): - self.backend_name = backend_name - self.backend_path = backend_path - self.message = message - - -class HookMissing(Exception): - """Will be raised on missing hooks.""" - def __init__(self, hook_name): - super(HookMissing, self).__init__(hook_name) - self.hook_name = hook_name - - -class UnsupportedOperation(Exception): - """May be raised by build_sdist if the backend indicates that it can't.""" - def __init__(self, traceback): - self.traceback = traceback - - -def default_subprocess_runner(cmd, cwd=None, extra_environ=None): - """The default method of calling the wrapper subprocess.""" - env = os.environ.copy() - if extra_environ: - env.update(extra_environ) - - check_call(cmd, cwd=cwd, env=env) - - -def quiet_subprocess_runner(cmd, cwd=None, extra_environ=None): - """A method of calling the wrapper subprocess while suppressing output.""" - env = os.environ.copy() - if extra_environ: - env.update(extra_environ) - - check_output(cmd, cwd=cwd, env=env, stderr=STDOUT) - - -def norm_and_check(source_tree, requested): - """Normalise and check a backend path. - - Ensure that the requested backend path is specified as a relative path, - and resolves to a location under the given source tree. - - Return an absolute version of the requested path. - """ - if os.path.isabs(requested): - raise ValueError("paths must be relative") - - abs_source = os.path.abspath(source_tree) - abs_requested = os.path.normpath(os.path.join(abs_source, requested)) - # We have to use commonprefix for Python 2.7 compatibility. So we - # normalise case to avoid problems because commonprefix is a character - # based comparison :-( - norm_source = os.path.normcase(abs_source) - norm_requested = os.path.normcase(abs_requested) - if os.path.commonprefix([norm_source, norm_requested]) != norm_source: - raise ValueError("paths must be inside source tree") - - return abs_requested - - -class Pep517HookCaller(object): - """A wrapper around a source directory to be built with a PEP 517 backend. - - :param source_dir: The path to the source directory, containing - pyproject.toml. 
- :param build_backend: The build backend spec, as per PEP 517, from - pyproject.toml. - :param backend_path: The backend path, as per PEP 517, from pyproject.toml. - :param runner: A callable that invokes the wrapper subprocess. - :param python_executable: The Python executable used to invoke the backend - - The 'runner', if provided, must expect the following: - - - cmd: a list of strings representing the command and arguments to - execute, as would be passed to e.g. 'subprocess.check_call'. - - cwd: a string representing the working directory that must be - used for the subprocess. Corresponds to the provided source_dir. - - extra_environ: a dict mapping environment variable names to values - which must be set for the subprocess execution. - """ - def __init__( - self, - source_dir, - build_backend, - backend_path=None, - runner=None, - python_executable=None, - ): - if runner is None: - runner = default_subprocess_runner - - self.source_dir = abspath(source_dir) - self.build_backend = build_backend - if backend_path: - backend_path = [ - norm_and_check(self.source_dir, p) for p in backend_path - ] - self.backend_path = backend_path - self._subprocess_runner = runner - if not python_executable: - python_executable = sys.executable - self.python_executable = python_executable - - @contextmanager - def subprocess_runner(self, runner): - """A context manager for temporarily overriding the default subprocess - runner. - """ - prev = self._subprocess_runner - self._subprocess_runner = runner - try: - yield - finally: - self._subprocess_runner = prev - - def _supported_features(self): - """Return the list of optional features supported by the backend.""" - return self._call_hook('_supported_features', {}) - - def get_requires_for_build_wheel(self, config_settings=None): - """Identify packages required for building a wheel - - Returns a list of dependency specifications, e.g.:: - - ["wheel >= 0.25", "setuptools"] - - This does not include requirements specified in pyproject.toml. - It returns the result of calling the equivalently named hook in a - subprocess. - """ - return self._call_hook('get_requires_for_build_wheel', { - 'config_settings': config_settings - }) - - def prepare_metadata_for_build_wheel( - self, metadata_directory, config_settings=None, - _allow_fallback=True): - """Prepare a ``*.dist-info`` folder with metadata for this project. - - Returns the name of the newly created folder. - - If the build backend defines a hook with this name, it will be called - in a subprocess. If not, the backend will be asked to build a wheel, - and the dist-info extracted from that (unless _allow_fallback is - False). - """ - return self._call_hook('prepare_metadata_for_build_wheel', { - 'metadata_directory': abspath(metadata_directory), - 'config_settings': config_settings, - '_allow_fallback': _allow_fallback, - }) - - def build_wheel( - self, wheel_directory, config_settings=None, - metadata_directory=None): - """Build a wheel from this project. - - Returns the name of the newly created file. - - In general, this will call the 'build_wheel' hook in the backend. - However, if that was previously called by - 'prepare_metadata_for_build_wheel', and the same metadata_directory is - used, the previously built wheel will be copied to wheel_directory. 
- """ - if metadata_directory is not None: - metadata_directory = abspath(metadata_directory) - return self._call_hook('build_wheel', { - 'wheel_directory': abspath(wheel_directory), - 'config_settings': config_settings, - 'metadata_directory': metadata_directory, - }) - - def get_requires_for_build_editable(self, config_settings=None): - """Identify packages required for building an editable wheel - - Returns a list of dependency specifications, e.g.:: - - ["wheel >= 0.25", "setuptools"] - - This does not include requirements specified in pyproject.toml. - It returns the result of calling the equivalently named hook in a - subprocess. - """ - return self._call_hook('get_requires_for_build_editable', { - 'config_settings': config_settings - }) - - def prepare_metadata_for_build_editable( - self, metadata_directory, config_settings=None, - _allow_fallback=True): - """Prepare a ``*.dist-info`` folder with metadata for this project. - - Returns the name of the newly created folder. - - If the build backend defines a hook with this name, it will be called - in a subprocess. If not, the backend will be asked to build an editable - wheel, and the dist-info extracted from that (unless _allow_fallback is - False). - """ - return self._call_hook('prepare_metadata_for_build_editable', { - 'metadata_directory': abspath(metadata_directory), - 'config_settings': config_settings, - '_allow_fallback': _allow_fallback, - }) - - def build_editable( - self, wheel_directory, config_settings=None, - metadata_directory=None): - """Build an editable wheel from this project. - - Returns the name of the newly created file. - - In general, this will call the 'build_editable' hook in the backend. - However, if that was previously called by - 'prepare_metadata_for_build_editable', and the same metadata_directory - is used, the previously built wheel will be copied to wheel_directory. - """ - if metadata_directory is not None: - metadata_directory = abspath(metadata_directory) - return self._call_hook('build_editable', { - 'wheel_directory': abspath(wheel_directory), - 'config_settings': config_settings, - 'metadata_directory': metadata_directory, - }) - - def get_requires_for_build_sdist(self, config_settings=None): - """Identify packages required for building a wheel - - Returns a list of dependency specifications, e.g.:: - - ["setuptools >= 26"] - - This does not include requirements specified in pyproject.toml. - It returns the result of calling the equivalently named hook in a - subprocess. - """ - return self._call_hook('get_requires_for_build_sdist', { - 'config_settings': config_settings - }) - - def build_sdist(self, sdist_directory, config_settings=None): - """Build an sdist from this project. - - Returns the name of the newly created file. - - This calls the 'build_sdist' backend hook in a subprocess. - """ - return self._call_hook('build_sdist', { - 'sdist_directory': abspath(sdist_directory), - 'config_settings': config_settings, - }) - - def _call_hook(self, hook_name, kwargs): - # On Python 2, pytoml returns Unicode values (which is correct) but the - # environment passed to check_call needs to contain string values. We - # convert here by encoding using ASCII (the backend can only contain - # letters, digits and _, . and : characters, and will be used as a - # Python identifier, so non-ASCII content is wrong on Python 2 in - # any case). - # For backend_path, we use sys.getfilesystemencoding. 
- if sys.version_info[0] == 2: - build_backend = self.build_backend.encode('ASCII') - else: - build_backend = self.build_backend - extra_environ = {'PEP517_BUILD_BACKEND': build_backend} - - if self.backend_path: - backend_path = os.pathsep.join(self.backend_path) - if sys.version_info[0] == 2: - backend_path = backend_path.encode(sys.getfilesystemencoding()) - extra_environ['PEP517_BACKEND_PATH'] = backend_path - - with tempdir() as td: - hook_input = {'kwargs': kwargs} - compat.write_json(hook_input, pjoin(td, 'input.json'), - indent=2) - - # Run the hook in a subprocess - with _in_proc_script_path() as script: - python = self.python_executable - self._subprocess_runner( - [python, abspath(str(script)), hook_name, td], - cwd=self.source_dir, - extra_environ=extra_environ - ) - - data = compat.read_json(pjoin(td, 'output.json')) - if data.get('unsupported'): - raise UnsupportedOperation(data.get('traceback', '')) - if data.get('no_backend'): - raise BackendUnavailable(data.get('traceback', '')) - if data.get('backend_invalid'): - raise BackendInvalid( - backend_name=self.build_backend, - backend_path=self.backend_path, - message=data.get('backend_error', '') - ) - if data.get('hook_missing'): - raise HookMissing(data.get('missing_hook_name') or hook_name) - return data['return_val'] - - -class LoggerWrapper(threading.Thread): - """ - Read messages from a pipe and redirect them - to a logger (see python's logging module). - """ - - def __init__(self, logger, level): - threading.Thread.__init__(self) - self.daemon = True - - self.logger = logger - self.level = level - - # create the pipe and reader - self.fd_read, self.fd_write = os.pipe() - self.reader = os.fdopen(self.fd_read) - - self.start() - - def fileno(self): - return self.fd_write - - @staticmethod - def remove_newline(msg): - return msg[:-1] if msg.endswith(os.linesep) else msg - - def run(self): - for line in self.reader: - self._write(self.remove_newline(line)) - - def _write(self, message): - self.logger.log(self.level, message) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/single_stage.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/single_stage.py deleted file mode 100644 index d01ebf303be9aa9158a0dce246880e253f90db4e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/single_stage.py +++ /dev/null @@ -1,137 +0,0 @@ -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class SingleStageDetector(BaseDetector): - """Base class for single-stage detectors. - - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. 
- """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg) - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - # get origin input shape to support onnx dynamic shape - if torch.onnx.is_in_onnx_export(): - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - # skip post-processing when exporting to ONNX - if torch.onnx.is_in_onnx_export(): - return bbox_list - - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in bbox_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. 
- rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - assert hasattr(self.bbox_head, 'aug_test'), \ - f'{self.bbox_head.__class__.__name__}' \ - ' does not support test-time augmentation' - - feats = self.extract_feats(imgs) - return [self.bbox_head.aug_test(feats, img_metas, rescale=rescale)] diff --git a/spaces/trttung1610/musicgen/docs/DATASETS.md b/spaces/trttung1610/musicgen/docs/DATASETS.md deleted file mode 100644 index b0890c03cf732450eb498559638c6b45d50e40c3..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/docs/DATASETS.md +++ /dev/null @@ -1,82 +0,0 @@ -# AudioCraft datasets - -Our dataset manifest files consist in 1-json-per-line files, potentially gzipped, -as `data.jsons` or `data.jsons.gz` files. This JSON contains the path to the audio -file and associated metadata. The manifest files are then provided in the configuration, -as `datasource` sub-configuration. A datasource contains the pointers to the paths of -the manifest files for each AudioCraft stage (or split) along with additional information -(eg. maximum sample rate to use against this dataset). All the datasources are under the -`dset` group config, with a dedicated configuration file for each dataset. - -## Getting started - -### Example - -See the provided example in the directory that provides a manifest to use the example dataset -provided under the [dataset folder](../dataset/example). - -The manifest files are stored in the [egs folder](../egs/example). - -```shell -egs/ - example/data.json.gz -``` - -A datasource is defined in the configuration folder, in the dset group config for this dataset -at [config/dset/audio/example](../config/dset/audio/example.yaml): - -```shell -# @package __global__ - -datasource: - max_sample_rate: 44100 - max_channels: 2 - - train: egs/example - valid: egs/example - evaluate: egs/example - generate: egs/example -``` - -For proper dataset, one should create manifest for each of the splits and specify the correct path -to the given manifest in the datasource for each split. - -Then, using a dataset through the configuration can be done pointing to the -corresponding dataset configuration: -```shell -dset= # should match the yaml file name - -# for example -dset=audio/example -``` - -### Creating manifest files - -Assuming you want to create manifest files to load with AudioCraft's AudioDataset, you can use -the following command to create new manifest files from a given folder containing audio files: - -```shell -python -m audiocraft.data.audio_dataset egs/my_dataset/my_dataset_split/data.jsonl.gz - -# For example to generate the manifest for dset=audio/example -# note: we don't use any split and we don't compress the jsonl file for this dummy example -python -m audiocraft.data.audio_dataset dataset/example egs/example/data.jsonl - -# More info with: python -m audiocraft.data.audio_dataset --help -``` - -## Additional information - -### MusicDataset and metadata - -The MusicDataset is an AudioDataset with additional metadata. The MusicDataset expects -the additional metadata to be stored in a JSON file that has the same path as the corresponding -audio file, but with a `.json` extension. - -### SoundDataset and metadata - -The SoundDataset is an AudioDataset with descriptions metadata. 
Similarly to the MusicDataset, -the SoundDataset expects the additional metadata to be stored in a JSON file that has the same -path as the corresponding audio file, but with a `.json` extension. Additionally, the SoundDataset -supports an additional parameter pointing to an extra folder `external_metadata_source` containing -all the JSON metadata files given they have the same filename as the audio file. diff --git a/spaces/trttung1610/musicgen/tests/data/test_audio_dataset.py b/spaces/trttung1610/musicgen/tests/data/test_audio_dataset.py deleted file mode 100644 index b591ea6137f48d0d97fcd1243c5f5d258670a474..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def 
_create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, - sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. 
for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(0, rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - try: - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - assert False - except AssertionError: - assert True - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/ttt246/brain/Brain/src/commands/__init__.py b/spaces/ttt246/brain/Brain/src/commands/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/uSerNameDDHL/bingo/src/components/settings.tsx b/spaces/uSerNameDDHL/bingo/src/components/settings.tsx deleted file mode 100644 index 45ba6044ff9cbe584f62292a49ea2ace9acc1f48..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/components/settings.tsx +++ /dev/null @@ -1,157 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0') - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
        - 图文示例: - 如何获取 BING_HEADER - - -
        - -
        - setCurlValue(e.target.value)} - /> -
        - 身份信息仅用于画图(推荐) - setImageOnly(checked)} - > - - -
        - - - - - - - -
        - ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
        - 启用语音回答 - setEnableTTS(checked)} - > - - -
        - - - - -
        -
        - ) - } - return null -} diff --git a/spaces/ullasmrnva/LawBerta/README.md b/spaces/ullasmrnva/LawBerta/README.md deleted file mode 100644 index 405a92fb8b9cce07a18e2683b15e9ca633fd94b9..0000000000000000000000000000000000000000 --- a/spaces/ullasmrnva/LawBerta/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LawBerta -emoji: 📉 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ulysses115/diffsvc_test/utils/__init__.py b/spaces/ulysses115/diffsvc_test/utils/__init__.py deleted file mode 100644 index edd05b1cbcf86d489ce395ab90e50587c7bef4c6..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/utils/__init__.py +++ /dev/null @@ -1,250 +0,0 @@ -import glob -import logging -import re -import time -from collections import defaultdict -import os -import sys -import shutil -import types -import numpy as np -import torch -import torch.nn.functional as F -import torch.distributed as dist -from torch import nn - - -def tensors_to_scalars(metrics): - new_metrics = {} - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - v = v.item() - if type(v) is dict: - v = tensors_to_scalars(v) - new_metrics[k] = v - return new_metrics - - -class AvgrageMeter(object): - - def __init__(self): - self.reset() - - def reset(self): - self.avg = 0 - self.sum = 0 - self.cnt = 0 - - def update(self, val, n=1): - self.sum += val * n - self.cnt += n - self.avg = self.sum / self.cnt - - -def collate_1d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) if max_len is None else max_len - res = values[0].new(len(values), size).fill_(pad_idx) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if shift_right: - dst[1:] = src[:-1] - dst[0] = shift_id - else: - dst.copy_(src) - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)]) - return res - - -def collate_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None): - """Convert a list of 2d tensors into a padded 3d tensor.""" - size = max(v.size(0) for v in values) if max_len is None else max_len - res = values[0].new(len(values), size, values[0].shape[1]).fill_(pad_idx) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if shift_right: - dst[1:] = src[:-1] - else: - dst.copy_(src) - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)]) - return res - - -def _is_batch_full(batch, num_tokens, max_tokens, max_sentences): - if len(batch) == 0: - return 0 - if len(batch) == max_sentences: - return 1 - if num_tokens > max_tokens: - return 1 - return 0 - - -def batch_by_size( - indices, num_tokens_fn, max_tokens=None, max_sentences=None, - required_batch_size_multiple=1, distributed=False -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). 
- required_batch_size_multiple (int, optional): require batch size to - be a multiple of N (default: 1). - """ - max_tokens = max_tokens if max_tokens is not None else sys.maxsize - max_sentences = max_sentences if max_sentences is not None else sys.maxsize - bsz_mult = required_batch_size_multiple - - if isinstance(indices, types.GeneratorType): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - sample_len = 0 - sample_lens = [] - batch = [] - batches = [] - for i in range(len(indices)): - idx = indices[i] - num_tokens = num_tokens_fn(idx) - sample_lens.append(num_tokens) - sample_len = max(sample_len, num_tokens) - assert sample_len <= max_tokens, ( - "sentence at index {} of size {} exceeds max_tokens " - "limit of {}!".format(idx, sample_len, max_tokens) - ) - num_tokens = (len(batch) + 1) * sample_len - - if _is_batch_full(batch, num_tokens, max_tokens, max_sentences): - mod_len = max( - bsz_mult * (len(batch) // bsz_mult), - len(batch) % bsz_mult, - ) - batches.append(batch[:mod_len]) - batch = batch[mod_len:] - sample_lens = sample_lens[mod_len:] - sample_len = max(sample_lens) if len(sample_lens) > 0 else 0 - batch.append(idx) - if len(batch) > 0: - batches.append(batch) - return batches - - -def make_positions(tensor, padding_idx): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. - mask = tensor.ne(padding_idx).int() - return ( - torch.cumsum(mask, dim=1).type_as(mask) * mask - ).long() + padding_idx - - -def softmax(x, dim): - return F.softmax(x, dim=dim, dtype=torch.float32) - - -def unpack_dict_to_list(samples): - samples_ = [] - bsz = samples.get('outputs').size(0) - for i in range(bsz): - res = {} - for k, v in samples.items(): - try: - res[k] = v[i] - except: - pass - samples_.append(res) - return samples_ - - -def load_ckpt(cur_model, ckpt_base_dir, prefix_in_ckpt='model', force=True, strict=True): - if os.path.isfile(ckpt_base_dir): - base_dir = os.path.dirname(ckpt_base_dir) - checkpoint_path = [ckpt_base_dir] - else: - base_dir = ckpt_base_dir - checkpoint_path = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key= - lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x.replace('\\','/'))[0])) - if len(checkpoint_path) > 0: - checkpoint_path = checkpoint_path[-1] - state_dict = torch.load(checkpoint_path, map_location="cpu")["state_dict"] - state_dict = {k[len(prefix_in_ckpt) + 1:]: v for k, v in state_dict.items() - if k.startswith(f'{prefix_in_ckpt}.')} - if not strict: - cur_model_state_dict = cur_model.state_dict() - unmatched_keys = [] - for key, param in state_dict.items(): - if key in cur_model_state_dict: - new_param = cur_model_state_dict[key] - if new_param.shape != param.shape: - unmatched_keys.append(key) - print("| Unmatched keys: ", key, new_param.shape, param.shape) - for key in unmatched_keys: - del state_dict[key] - cur_model.load_state_dict(state_dict, strict=strict) - print(f"| load '{prefix_in_ckpt}' from '{checkpoint_path}'.") - else: - e_msg = f"| ckpt not found in {base_dir}." 
- if force: - assert False, e_msg - else: - print(e_msg) - - -def remove_padding(x, padding_idx=0): - if x is None: - return None - assert len(x.shape) in [1, 2] - if len(x.shape) == 2: # [T, H] - return x[np.abs(x).sum(-1) != padding_idx] - elif len(x.shape) == 1: # [T] - return x[x != padding_idx] - - -class Timer: - timer_map = {} - - def __init__(self, name, print_time=False): - if name not in Timer.timer_map: - Timer.timer_map[name] = 0 - self.name = name - self.print_time = print_time - - def __enter__(self): - self.t = time.time() - - def __exit__(self, exc_type, exc_val, exc_tb): - Timer.timer_map[self.name] += time.time() - self.t - if self.print_time: - print(self.name, Timer.timer_map[self.name]) - - -def print_arch(model, model_name='model'): - #print(f"| {model_name} Arch: ", model) - num_params(model, model_name=model_name) - - -def num_params(model, print_out=True, model_name="model"): - parameters = filter(lambda p: p.requires_grad, model.parameters()) - parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000 - if print_out: - print(f'| {model_name} Trainable Parameters: %.3fM' % parameters) - return parameters diff --git a/spaces/unstructuredio/receipt-parser/README.md b/spaces/unstructuredio/receipt-parser/README.md deleted file mode 100644 index 5a0ca69f3552304477308076e075f4f4ae90b621..0000000000000000000000000000000000000000 --- a/spaces/unstructuredio/receipt-parser/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Receipt Parser -emoji: 🧾 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Autocad 2012 x64 (64bit) (Product key and Xforce keygen) utorrent Everything You Need to Know.md b/spaces/usbethFlerru/sovits-modelsV2/example/Autocad 2012 x64 (64bit) (Product key and Xforce keygen) utorrent Everything You Need to Know.md deleted file mode 100644 index 94ac100d4e7a4bce794098a3a96a8ee4e1a5b5ea..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Autocad 2012 x64 (64bit) (Product key and Xforce keygen) utorrent Everything You Need to Know.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Autocad 2012 x64 (64bit) (Product key and Xforce keygen) utorrent
-DOWNLOAD ->>> https://urlcod.com/2uyV5h
- aaccfb2cb3

        diff --git a/spaces/vaibhavarduino/chatGPT-Wrapper/memory/memory.py b/spaces/vaibhavarduino/chatGPT-Wrapper/memory/memory.py deleted file mode 100644 index 86222629199e7b62a27b5d33e67b16d0843c3ad1..0000000000000000000000000000000000000000 --- a/spaces/vaibhavarduino/chatGPT-Wrapper/memory/memory.py +++ /dev/null @@ -1,109 +0,0 @@ -import json -import os - - -''' -This class allows storage of values in json format. It adds an easy -Interface to save values to disk. - -The SAVE FUNCTION MUST BE RAN BEFORE THE OBJECT IS DESTROYED. This was by design -to limit the amount of time spent writing to disk. - -This class creates a file called memory.json to store its data. If the -file doesnt exists it creates it and if it has already been created it loads -the data to memory. - - -example: - m = Memory() - name = m.get_data('firstName') - m.add_data('lastName', 'albert') - m.save() -''' -# this sets the path to the modules directory not the directory it was -# call from -module_path = os.path.dirname(__file__) - - -class Memory: - ''' - Initialize data with saved json file - ''' - - def __init__(self, mfile='memory.json'): - self.json_file = os.path.join(module_path, mfile) - self.data = '' - # Try to open file if it doesnt exist it will throw an error - try: - with open(self.json_file, 'r') as f: - self.data = json.load(f) - except IOError: - # create the new file with an empty json object - with open(self.json_file, 'w') as f: - f.write('{}') - # add the data to the memory object - with open(self.json_file, 'r') as f: - self.data = json.load(f) - - ''' - returns the json string - ''' - - def get_all(self): - return self.data - - ''' - get a specific key from memory - ''' - - def get_data(self, key): - try: - return self.data[key] - except BaseException: - return None - - ''' - add a key and value to memory - ''' - - def add_data(self, key, value): - if self.get_data(key) is not None: - print("data already exists with that name") - else: - self.data[key] = value - - ''' - Updates a key with supplied value. - ''' - - def update_data(self, key, value): - self.data[key] = value - - ''' - delete a key from memory - ''' - - def del_data(self, key): - try: - del self.data[key] - except KeyError: - pass - - ''' - !!!!DANGER!!!!! - deletes the entire memory and overwrites the file with a blank file - only use when absolutely needed. - ''' - - def del_all(self): - with open(self.json_file, 'w') as f: - f.write('') - - ''' - Saves memory to disk. This must be ran before memory object - is destroyed. Otherwise all changes will be lost. 
- ''' - - def save(self): - with open(self.json_file, 'w') as f: - json.dump(self.data, f) diff --git a/spaces/vakosta/Code2Comment/app.py b/spaces/vakosta/Code2Comment/app.py deleted file mode 100644 index c7d6725602887b0ea55bcebeadad5c94fd5b8349..0000000000000000000000000000000000000000 --- a/spaces/vakosta/Code2Comment/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/SEBIS/code_trans_t5_base_code_documentation_generation_java_transfer_learning_finetune").launch() \ No newline at end of file diff --git a/spaces/valhalla/glide-text2im/README.md b/spaces/valhalla/glide-text2im/README.md deleted file mode 100644 index 7aad3062d5e90b8c25c585ef123d153789cbdb24..0000000000000000000000000000000000000000 --- a/spaces/valhalla/glide-text2im/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Glide Text2im -emoji: 📊 -colorFrom: purple -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/venz/AW-03-GR-AI-Text2ArtGenerator/app.py b/spaces/venz/AW-03-GR-AI-Text2ArtGenerator/app.py deleted file mode 100644 index 1842b91661e1edd1167802f4093d3e887f662042..0000000000000000000000000000000000000000 --- a/spaces/venz/AW-03-GR-AI-Text2ArtGenerator/app.py +++ /dev/null @@ -1,228 +0,0 @@ -import os - -os.system("git clone --recursive https://github.com/JD-P/cloob-latent-diffusion") -os.system("cd cloob-latent-diffusion;pip install omegaconf pillow pytorch-lightning einops wandb ftfy regex ./CLIP") - -import argparse -from functools import partial -from pathlib import Path -import sys -sys.path.append('./cloob-latent-diffusion') -sys.path.append('./cloob-latent-diffusion/cloob-training') -sys.path.append('./cloob-latent-diffusion/latent-diffusion') -sys.path.append('./cloob-latent-diffusion/taming-transformers') -sys.path.append('./cloob-latent-diffusion/v-diffusion-pytorch') -from omegaconf import OmegaConf -from PIL import Image -import torch -from torch import nn -from torch.nn import functional as F -from torchvision import transforms -from torchvision.transforms import functional as TF -from tqdm import trange -from CLIP import clip -from cloob_training import model_pt, pretrained -import ldm.models.autoencoder -from diffusion import sampling, utils -import train_latent_diffusion as train -from huggingface_hub import hf_hub_url, cached_download -import random - -# Download the model files -checkpoint = cached_download(hf_hub_url("huggan/distill-ccld-wa", filename="model_student.ckpt")) -ae_model_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.ckpt")) -ae_config_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.yaml")) - -# Define a few utility functions - - -def parse_prompt(prompt, default_weight=3.): - if 
prompt.startswith('http://') or prompt.startswith('https://'): - vals = prompt.rsplit(':', 2) - vals = [vals[0] + ':' + vals[1], *vals[2:]] - else: - vals = prompt.rsplit(':', 1) - vals = vals + ['', default_weight][len(vals):] - return vals[0], float(vals[1]) - - -def resize_and_center_crop(image, size): - fac = max(size[0] / image.size[0], size[1] / image.size[1]) - image = image.resize((int(fac * image.size[0]), int(fac * image.size[1])), Image.LANCZOS) - return TF.center_crop(image, size[::-1]) - - -# Load the models -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -print('Using device:', device) -print('loading models') - -# autoencoder -ae_config = OmegaConf.load(ae_config_path) -ae_model = ldm.models.autoencoder.AutoencoderKL(**ae_config.model.params) -ae_model.eval().requires_grad_(False).to(device) -ae_model.load_state_dict(torch.load(ae_model_path)) -n_ch, side_y, side_x = 4, 32, 32 - -# diffusion model -model = train.DiffusionModel(192, [1,1,2,2], autoencoder_scale=torch.tensor(4.3084)) -model.load_state_dict(torch.load(checkpoint, map_location='cpu')) -model = model.to(device).eval().requires_grad_(False) - -# CLOOB -cloob_config = pretrained.get_config('cloob_laion_400m_vit_b_16_16_epochs') -cloob = model_pt.get_pt_model(cloob_config) -checkpoint = pretrained.download_checkpoint(cloob_config) -cloob.load_state_dict(model_pt.get_pt_params(cloob_config, checkpoint)) -cloob.eval().requires_grad_(False).to(device) - - -# The key function: returns a list of n PIL images -def generate(n=1, prompts=['a red circle'], images=[], seed=42, steps=15, - method='plms', eta=None): - zero_embed = torch.zeros([1, cloob.config['d_embed']], device=device) - target_embeds, weights = [zero_embed], [] - - for prompt in prompts: - txt, weight = parse_prompt(prompt) - target_embeds.append(cloob.text_encoder(cloob.tokenize(txt).to(device)).float()) - weights.append(weight) - - for prompt in images: - path, weight = parse_prompt(prompt) - img = Image.open(utils.fetch(path)).convert('RGB') - clip_size = cloob.config['image_encoder']['image_size'] - img = resize_and_center_crop(img, (clip_size, clip_size)) - batch = TF.to_tensor(img)[None].to(device) - embed = F.normalize(cloob.image_encoder(cloob.normalize(batch)).float(), dim=-1) - target_embeds.append(embed) - weights.append(weight) - - weights = torch.tensor([1 - sum(weights), *weights], device=device) - - torch.manual_seed(seed) - - def cfg_model_fn(x, t): - n = x.shape[0] - n_conds = len(target_embeds) - x_in = x.repeat([n_conds, 1, 1, 1]) - t_in = t.repeat([n_conds]) - clip_embed_in = torch.cat([*target_embeds]).repeat_interleave(n, 0) - vs = model(x_in, t_in, clip_embed_in).view([n_conds, n, *x.shape[1:]]) - v = vs.mul(weights[:, None, None, None, None]).sum(0) - return v - - def run(x, steps): - if method == 'ddpm': - return sampling.sample(cfg_model_fn, x, steps, 1., {}) - if method == 'ddim': - return sampling.sample(cfg_model_fn, x, steps, eta, {}) - if method == 'prk': - return sampling.prk_sample(cfg_model_fn, x, steps, {}) - if method == 'plms': - return sampling.plms_sample(cfg_model_fn, x, steps, {}) - if method == 'pie': - return sampling.pie_sample(cfg_model_fn, x, steps, {}) - if method == 'plms2': - return sampling.plms2_sample(cfg_model_fn, x, steps, {}) - assert False - - batch_size = n - x = torch.randn([n, n_ch, side_y, side_x], device=device) - t = torch.linspace(1, 0, steps + 1, device=device)[:-1] - steps = utils.get_spliced_ddpm_cosine_schedule(t) - pil_ims = [] - for i in trange(0, n, batch_size): - 
cur_batch_size = min(n - i, batch_size) - out_latents = run(x[i:i+cur_batch_size], steps) - outs = ae_model.decode(out_latents * torch.tensor(2.55).to(device)) - for j, out in enumerate(outs): - pil_ims.append(utils.to_pil_image(out)) - - return pil_ims - - -import gradio as gr - -def gen_ims(prompt, im_prompt=None, seed=None, n_steps=10, method='plms'): - if seed == None : - seed = random.randint(0, 10000) - print( prompt, im_prompt, seed, n_steps) - prompts = [prompt] - im_prompts = [] - if im_prompt != None: - im_prompts = [im_prompt] - pil_ims = generate(n=1, prompts=prompts, images=im_prompts, seed=seed, steps=n_steps, method=method) - return pil_ims[0] - -iface = gr.Interface(fn=gen_ims, - inputs=[#gr.inputs.Slider(minimum=1, maximum=1, step=1, default=1,label="Number of images"), - #gr.inputs.Slider(minimum=0, maximum=200, step=1, label='Random seed', default=0), - gr.inputs.Textbox(label="Text prompt"), - gr.inputs.Image(optional=True, label="Image prompt", type='filepath'), - #gr.inputs.Slider(minimum=10, maximum=35, step=1, default=15,label="Number of steps") - ], - outputs=[gr.outputs.Image(type="pil", label="Generated Image")], - examples=[ - ["Virgin and Child, in the style of Jacopo Bellini"], - ["Katsushika Hokusai, The Dragon of Smoke Escaping from Mount Fuji"], - ["Moon Light Sonata by Basuki Abdullah"], - ["Twon Tree by M.C. Escher"], - ["Futurism, in the style of Wassily Kandinsky"], - ["Art Nouveau, in the style of John Singer Sargent"], - ["Surrealism, in the style of Edgar Degas"], - ["Expressionism, in the style of Wassily Kandinsky"], - ["Futurism, in the style of Egon Schiele"], - ["Neoclassicism, in the style of Gustav Klimt"], - ["Cubism, in the style of Gustav Klimt"], - ["Op Art, in the style of Marc Chagall"], - ["Romanticism, in the style of M.C. Escher"], - ["Futurism, in the style of M.C. Escher"], - ["Abstract Art, in the style of M.C. Escher"], - ["Mannerism, in the style of Paul Klee"], - ["Romanesque Art, in the style of Leonardo da Vinci"], - ["High Renaissance, in the style of Rembrandt"], - ["Magic Realism, in the style of Gustave Dore"], - ["Realism, in the style of Jean-Michel Basquiat"], - ["Art Nouveau, in the style of Paul Gauguin"], - ["Avant-garde, in the style of Pierre-Auguste Renoir"], - ["Baroque, in the style of Edward Hopper"], - ["Post-Impressionism, in the style of Wassily Kandinsky"], - ["Naturalism, in the style of Rene Magritte"], - ["Constructivism, in the style of Paul Cezanne"], - ["Abstract Expressionism, in the style of Henri Matisse"], - ["Pop Art, in the style of Vincent van Gogh"], - ["Futurism, in the style of Wassily Kandinsky"], - ["Futurism, in the style of Zdzislaw Beksinski"], - ['Surrealism, in the style of Salvador Dali'], - ["Aaron Wacker, oil on canvas"], - ["abstract"], - ["landscape"], - ["portrait"], - ["sculpture"], - ["genre painting"], - ["installation"], - ["photo"], - ["figurative"], - ["illustration"], - ["still life"], - ["history painting"], - ["cityscape"], - ["marina"], - ["animal painting"], - ["design"], - ["calligraphy"], - ["symbolic painting"], - ["graffiti"], - ["performance"], - ["mythological painting"], - ["battle painting"], - ["self-portrait"], - ["Impressionism, oil on canvas"] - ], - title='Art Generator and Style Mixer from 🧠 Cloob and 🎨 WikiArt - Visual Art Encyclopedia:', - description="Trained on images from the [WikiArt](https://www.wikiart.org/) dataset, comprised of visual arts", - article = 'Model used is: [model card](https://huggingface.co/huggan/distill-ccld-wa)..' 
- -) -iface.launch(enable_queue=True) # , debug=True for colab debugging \ No newline at end of file diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/data/labels.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/data/labels.py deleted file mode 100644 index f175fa8a690092925f7ce7c21afa42ecb9ff22e1..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/data/labels.py +++ /dev/null @@ -1,63 +0,0 @@ -import numpy as np - -import torch - - -class_groups = { - # group : indices (assuming 0th position is id) - 0: (), - 1: (1, 2, 3), - 2: (4, 5), - 3: (6, 7), - 4: (8, 9), - 5: (10, 11, 12, 13), - 6: (14, 15), - 7: (16, 17, 18), - 8: (19, 20, 21, 22, 23, 24, 25), - 9: (26, 27, 28), - 10: (29, 30, 31), - 11: (32, 33, 34, 35, 36, 37), -} - - -class_groups_indices = {g: np.array(ixs)-1 for g, ixs in class_groups.items()} - - -hierarchy = { - # group : parent (group, label) - 2: (1, 1), - 3: (2, 1), - 4: (2, 1), - 5: (2, 1), - 7: (1, 0), - 8: (6, 0), - 9: (2, 0), - 10: (4, 0), - 11: (4, 0), -} - - -def make_galaxy_labels_hierarchical(labels: torch.Tensor) -> torch.Tensor: - """ transform groups of galaxy label probabilities to follow the hierarchical order defined in galaxy zoo - more info here: https://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge/overview/the-galaxy-zoo-decision-tree - labels is a NxL torch tensor, where N is the batch size and L is the number of labels, - all labels should be > 1 - the indices of label groups are listed in class_groups_indices - - Return - ------ - hierarchical_labels : NxL torch tensor, where L is the total number of labels - """ - shift = labels.shape[1] > 37 ## in case the id is included at 0th position, shift indices accordingly - index = lambda i: class_groups_indices[i] + shift - - for i in range(1, 12): - ## normalize probabilities to 1 - norm = torch.sum(labels[:, index(i)], dim=1, keepdims=True) - norm[norm == 0] += 1e-4 ## add small number to prevent NaNs dividing by zero, yet keep track of gradient - labels[:, index(i)] /= norm - ## renormalize according to hierarchical structure - if i not in [1, 6]: - parent_group_label = labels[:, index(hierarchy[i][0])] - labels[:, index(i)] *= parent_group_label[:, hierarchy[i][1]].unsqueeze(-1) - return labels diff --git a/spaces/vroy02243/ML/README.md b/spaces/vroy02243/ML/README.md deleted file mode 100644 index 5bd9a4cbc8ed89a9361960e39201e374de785c19..0000000000000000000000000000000000000000 --- a/spaces/vroy02243/ML/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ML -emoji: 📈 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.49.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vumichien/Generate_human_motion/VQ-Trans/dataset/dataset_tokenize.py b/spaces/vumichien/Generate_human_motion/VQ-Trans/dataset/dataset_tokenize.py deleted file mode 100644 index 641a02a75f2cfaadea45851cad2a95b39bfa1eae..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/VQ-Trans/dataset/dataset_tokenize.py +++ /dev/null @@ -1,117 +0,0 @@ -import torch -from torch.utils import data -import numpy as np -from os.path import join as pjoin -import random -import codecs as cs -from tqdm import tqdm - - - -class VQMotionDataset(data.Dataset): - def __init__(self, dataset_name, feat_bias = 5, window_size = 64, unit_length = 8): - self.window_size = window_size - self.unit_length = unit_length - self.feat_bias = feat_bias - - 
self.dataset_name = dataset_name - min_motion_len = 40 if dataset_name =='t2m' else 24 - - if dataset_name == 't2m': - self.data_root = './dataset/HumanML3D' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 22 - radius = 4 - fps = 20 - self.max_motion_length = 196 - dim_pose = 263 - self.meta_dir = 'checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - #kinematic_chain = paramUtil.t2m_kinematic_chain - elif dataset_name == 'kit': - self.data_root = './dataset/KIT-ML' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 21 - radius = 240 * 8 - fps = 12.5 - dim_pose = 251 - self.max_motion_length = 196 - self.meta_dir = 'checkpoints/kit/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - #kinematic_chain = paramUtil.kit_kinematic_chain - - joints_num = self.joints_num - - mean = np.load(pjoin(self.meta_dir, 'mean.npy')) - std = np.load(pjoin(self.meta_dir, 'std.npy')) - - split_file = pjoin(self.data_root, 'train.txt') - - data_dict = {} - id_list = [] - with cs.open(split_file, 'r') as f: - for line in f.readlines(): - id_list.append(line.strip()) - - new_name_list = [] - length_list = [] - for name in tqdm(id_list): - try: - motion = np.load(pjoin(self.motion_dir, name + '.npy')) - if (len(motion)) < min_motion_len or (len(motion) >= 200): - continue - - data_dict[name] = {'motion': motion, - 'length': len(motion), - 'name': name} - new_name_list.append(name) - length_list.append(len(motion)) - except: - # Some motion may not exist in KIT dataset - pass - - - self.mean = mean - self.std = std - self.length_arr = np.array(length_list) - self.data_dict = data_dict - self.name_list = new_name_list - - def inv_transform(self, data): - return data * self.std + self.mean - - def __len__(self): - return len(self.data_dict) - - def __getitem__(self, item): - name = self.name_list[item] - data = self.data_dict[name] - motion, m_length = data['motion'], data['length'] - - m_length = (m_length // self.unit_length) * self.unit_length - - idx = random.randint(0, len(motion) - m_length) - motion = motion[idx:idx+m_length] - - "Z Normalization" - motion = (motion - self.mean) / self.std - - return motion, name - -def DATALoader(dataset_name, - batch_size = 1, - num_workers = 8, unit_length = 4) : - - train_loader = torch.utils.data.DataLoader(VQMotionDataset(dataset_name, unit_length=unit_length), - batch_size, - shuffle=True, - num_workers=num_workers, - #collate_fn=collate_fn, - drop_last = True) - - return train_loader - -def cycle(iterable): - while True: - for x in iterable: - yield x diff --git a/spaces/xiang-wuu/yolov5/models/__init__.py b/spaces/xiang-wuu/yolov5/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/xuxw98/TAPA/howto/train_redpajama.md b/spaces/xuxw98/TAPA/howto/train_redpajama.md deleted file mode 100644 index eb012659806b5e6d348eafef16182c27b8d9db09..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/howto/train_redpajama.md +++ /dev/null @@ -1,133 +0,0 @@ -# Pre-train LLaMA on RedPajama - -This howto will walk you through setting up the RedPajama dataset and launching the pre-training script. - -## What's RedPajama - -[RedPajama](https://github.com/togethercomputer/RedPajama-Data) is an open-source reproduction of the original LLaMA training dataset. 
- -It contains a total of 1.2 trillion tokens, divided into - -```text -Commoncrawl 878B -C4 175B -GitHub 59B -Books 26B -ArXiv 28B -Wikipedia 24B -StackExchange 20B -``` - -The [RedPajama repo](https://github.com/togethercomputer/RedPajama-Data) contains the source code for collecting and preparing -the dataset, and it is Apache 2.0 licensed. - -The data itself is licensed according to the original licenses with which its individual parts were released. -The GitHub datasets are limited to MIT, BSD, or Apache 2.0 repositories. - -Along with the full [RedPajama-1T dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T), -the [RedPajama-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample) 1B sample dataset -is also available for development. - -You can download the data using git lfs: - -```bash -# Make sure you have git-lfs installed (https://git-lfs.com): git lfs install -git clone https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T data/RedPajama-Data-1T -``` - -```bash -# Make sure you have git-lfs installed (https://git-lfs.com): git lfs install -git clone https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample data/RedPajama-Data-1T-Sample -``` - -## Prepare RedPajama for training - -The dataset consists of 2084 `jsonl` files (the sample dataset contains 11). In order to start pre-training lit-llama -on it, you need to read, tokenize, and write the data in binary chunks. This will leverage the `PackedDataset` -streaming dataset that comes with lit-llama. - -To do so, run - -```bash -python scripts/prepare_redpajama.py --source_path data/RedPajama-Data-1T --tokenizer_path checkpoints/lit-llama/tokenizer.model --destination_path data/lit-redpajama -``` - -or - -```bash -python scripts/prepare_redpajama.py --source_path data/RedPajama-Data-1T-Sample --tokenizer_path checkpoints/lit-llama/tokenizer.model --destination_path data/lit-redpajama-sample --sample True -``` - -for the sample dataset. - -In the above we are assuming that you will be using the same tokenizer as used in LLaMA, but any trained [SentencePiece](https://github.com/google/sentencepiece) tokenizer with a 32000 vocabulary size will do here. - -The script will take a while to run, so time for :tea: - -## Pre-training - -Running the pre-training script requires at least 4 GPUs with 40GB+ each (A100). - -```bash -python pretrain/redpajama.py --devices 4 --train_data_dir data/lit-redpajama -``` - -For running on the sample dataset: - -```bash -python pretrain/redpajama.py --devices 4 --train_data_dir data/lit-redpajama-sample -``` - -The script will save checkpoints periodically to the folder `out/`. - -The `pretrain/redpajama.py` script will pre-train the LLaMA 7B model with FSDP in -`bfloat16` precision and gradient accumulation. - -You can easily change the size of the model by passing a different string to - -```python -config = LLaMAConfig.from_name("7B") -``` - -in the `main` function. - -Keep in mind that the original LLaMA training for the 7B model required 83k A100 80GB -hours, so you'll need access to a cluster.
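-As an aside on the gradient accumulation mentioned above: the effective batch size per optimizer step is reached by accumulating several micro-batches. A minimal sketch of that arithmetic, using the values from the hyperparameter block further down (the variable name `gradient_accumulation_steps` is illustrative, not necessarily the one the script uses):
-
-```python
-# Illustrative arithmetic only: how micro-batches add up to one optimizer step.
-batch_size = 125               # samples contributing to one optimizer step
-micro_batch_size = 5           # samples per forward/backward pass
-gradient_accumulation_steps = batch_size // micro_batch_size  # -> 25 passes per step
-```
-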
- -Once you're in a cluster, you can follow [these instructions](https://lightning.ai/docs/fabric/stable/guide/multi_node/other.html) -to launch the script across machines: - -- [SLURM cluster](https://lightning.ai/docs/fabric/stable/guide/multi_node/slurm.html) -- [Barebones cluster](https://lightning.ai/docs/fabric/stable/guide/multi_node/barebones.html) -- [MPI](https://lightning.ai/docs/fabric/stable/guide/multi_node/other.html) - -The script contains several configurations and hyperparameters you can tweak: - -```python -out_dir = "out/training" -save_interval = 1000 -eval_interval = 1000 -eval_iters = 100 -log_interval = 1 - -# Hyperparameters -learning_rate = 6e-4 -batch_size = 125 -micro_batch_size = 5 -max_iters = 600000 # num_epochs * (epoch_size // micro_batch_size) // devices -weight_decay = 1e-1 -beta1 = 0.9 -beta2 = 0.95 -grad_clip = 1.0 -decay_lr = True -warmup_iters = 2000 -lr_decay_iters = max_iters -min_lr = 6e-5 -``` - -In particular, `micro_batch_size` should be adjusted so the process will use the available -GPU memory. - -Last, logging is kept minimal in the script. In order to use a particular logger -please refer to or -call a logging client library like `wandb` directly. diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/ControlPane/LineGraph/LineGraphCanvas.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/ControlPane/LineGraph/LineGraphCanvas.tsx deleted file mode 100644 index bd49bbb1a8aa585fad05c7ea4b43fa5d017daada..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/ControlPane/LineGraph/LineGraphCanvas.tsx +++ /dev/null @@ -1,96 +0,0 @@ -import { GLCanvas, Transform } from "@ryohey/webgl-react" -import { observer } from "mobx-react-lite" -import { CSSProperties, FC, useMemo } from "react" -import { IPoint, IRect } from "../../../../common/geometry" -import { ControlCoordTransform } from "../../../../common/transform/ControlCoordTransform" -import { matrixFromTranslation } from "../../../helpers/matrix" -import { useStores } from "../../../hooks/useStores" -import { Beats } from "../../GLNodes/Beats" -import { Cursor } from "../../GLNodes/Cursor" -import { Selection } from "../../GLNodes/Selection" -import { LineGraphItems } from "./LineGraphItems" - -interface IDValue { - id: number -} - -export interface LineGraphCanvasProps { - width: number - height: number - maxValue: number - items: (IPoint & IDValue)[] - controlPoints: (IRect & IDValue)[] - style?: CSSProperties - onMouseDown: React.MouseEventHandler - onContextMenu: React.MouseEventHandler -} - -const lineWidth = 2 - -export const LineGraphCanvas: FC = observer( - ({ - items, - width, - height, - style, - maxValue, - controlPoints, - onMouseDown, - onContextMenu, - }) => { - const rootStore = useStores() - - const { - selection, - scrollLeft, - selectedEventIds, - cursorX, - transform, - rulerStore: { beats }, - } = rootStore.controlStore - - const controlTransform = useMemo( - () => - new ControlCoordTransform( - transform.pixelsPerTick, - maxValue, - height, - lineWidth, - ), - [transform.pixelsPerTick, maxValue, height], - ) - - const selectionRect = - selection !== null ? 
controlTransform.transformSelection(selection) : null - - const scrollXMatrix = useMemo( - () => matrixFromTranslation(-Math.floor(scrollLeft), 0), - [scrollLeft], - ) - - return ( - - - - - - - - - ) - }, -) diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/pretrain.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/pretrain.py deleted file mode 100644 index c9490ec8eb8ff5f074b5772ada55cd27ec673a12..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/pretrain.py +++ /dev/null @@ -1,173 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.backends.cudnn as cudnn -import torch.distributed as dist -from torch.utils.data import DataLoader - -from models.blip_pretrain import blip_pretrain -import utils -from utils import warmup_lr_schedule, step_lr_schedule -from data import create_dataset, create_sampler, create_loader - -def train(model, data_loader, optimizer, epoch, device, config): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=50, fmt='{value:.6f}')) - metric_logger.add_meter('loss_ita', utils.SmoothedValue(window_size=50, fmt='{value:.4f}')) - metric_logger.add_meter('loss_itm', utils.SmoothedValue(window_size=50, fmt='{value:.4f}')) - metric_logger.add_meter('loss_lm', utils.SmoothedValue(window_size=50, fmt='{value:.4f}')) - - header = 'Train Epoch: [{}]'.format(epoch) - print_freq = 50 - - if config['laion_path']: - data_loader.dataset.reload_laion(epoch) - - data_loader.sampler.set_epoch(epoch) - - for i, (image, caption) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - - if epoch==0: - warmup_lr_schedule(optimizer, i, config['warmup_steps'], config['warmup_lr'], config['init_lr']) - - optimizer.zero_grad() - - image = image.to(device,non_blocking=True) - - # ramp up alpha in the first 2 epochs - alpha = config['alpha']*min(1,(epoch*len(data_loader)+i)/(2*len(data_loader))) - - loss_ita, loss_itm, loss_lm = model(image, caption, alpha = alpha) - loss = loss_ita + loss_itm + loss_lm - - loss.backward() - optimizer.step() - - metric_logger.update(loss_ita=loss_ita.item()) - metric_logger.update(loss_itm=loss_itm.item()) - metric_logger.update(loss_lm=loss_lm.item()) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating dataset") - datasets = [create_dataset('pretrain', config, min_scale=0.2)] - print('number of training samples: %d'%len(datasets[0])) - - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - 
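-    # Descriptive comment (not in the original file): shard the training data across processes;
-    # num_tasks is the world size and global_rank selects this process's shard.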
samplers = create_sampler(datasets, [True], num_tasks, global_rank) - - data_loader = create_loader(datasets,samplers,batch_size=[config['batch_size']], num_workers=[4], is_trains=[True], collate_fns=[None])[0] - - #### Model #### - print("Creating model") - model = blip_pretrain(image_size=config['image_size'], vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], - vit_ckpt_layer=config['vit_ckpt_layer'], queue_size=config['queue_size']) - - model = model.to(device) - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - start_epoch = 0 - if args.checkpoint: - checkpoint = torch.load(args.checkpoint, map_location='cpu') - state_dict = checkpoint['model'] - model.load_state_dict(state_dict) - - optimizer.load_state_dict(checkpoint['optimizer']) - start_epoch = checkpoint['epoch']+1 - print('resume checkpoint from %s'%args.checkpoint) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - print("Start training") - start_time = time.time() - for epoch in range(start_epoch, config['max_epoch']): - - step_lr_schedule(optimizer, epoch, config['init_lr'], config['min_lr'], config['lr_decay_rate']) - - train_stats = train(model, data_loader, optimizer, epoch, device, config) - if utils.is_main_process(): - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - 'epoch': epoch, - } - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_%02d.pth'%epoch)) - - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - dist.barrier() - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/pretrain.yaml') - parser.add_argument('--output_dir', default='output/Pretrain') - parser.add_argument('--checkpoint', default='') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/data/__init__.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/data/__init__.py deleted file mode 100644 index c6adb4bb6a926af7a46aaec4794eee95fda02a33..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/data/__init__.py +++ /dev/null @@ -1,100 +0,0 @@ -import importlib -import numpy as np -import random -import torch -import torch.utils.data -from copy import deepcopy -from functools import partial -from os import path as osp - -from basicsr.data.prefetch_dataloader import 
PrefetchDataLoader -from basicsr.utils import get_root_logger, scandir -from basicsr.utils.dist_util import get_dist_info -from basicsr.utils.registry import DATASET_REGISTRY - -__all__ = ['build_dataset', 'build_dataloader'] - -# automatically scan and import dataset modules for registry -# scan all the files under the data folder with '_dataset' in file names -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames] - - -def build_dataset(dataset_opt): - """Build dataset from options. - - Args: - dataset_opt (dict): Configuration for dataset. It must constain: - name (str): Dataset name. - type (str): Dataset type. - """ - dataset_opt = deepcopy(dataset_opt) - dataset = DATASET_REGISTRY.get(dataset_opt['type'])(dataset_opt) - logger = get_root_logger() - logger.info(f'Dataset [{dataset.__class__.__name__}] - {dataset_opt["name"]} ' 'is built.') - return dataset - - -def build_dataloader(dataset, dataset_opt, num_gpu=1, dist=False, sampler=None, seed=None): - """Build dataloader. - - Args: - dataset (torch.utils.data.Dataset): Dataset. - dataset_opt (dict): Dataset options. It contains the following keys: - phase (str): 'train' or 'val'. - num_worker_per_gpu (int): Number of workers for each GPU. - batch_size_per_gpu (int): Training batch size for each GPU. - num_gpu (int): Number of GPUs. Used only in the train phase. - Default: 1. - dist (bool): Whether in distributed training. Used only in the train - phase. Default: False. - sampler (torch.utils.data.sampler): Data sampler. Default: None. - seed (int | None): Seed. Default: None - """ - phase = dataset_opt['phase'] - rank, _ = get_dist_info() - if phase == 'train': - if dist: # distributed training - batch_size = dataset_opt['batch_size_per_gpu'] - num_workers = dataset_opt['num_worker_per_gpu'] - else: # non-distributed training - multiplier = 1 if num_gpu == 0 else num_gpu - batch_size = dataset_opt['batch_size_per_gpu'] * multiplier - num_workers = dataset_opt['num_worker_per_gpu'] * multiplier - dataloader_args = dict( - dataset=dataset, - batch_size=batch_size, - shuffle=False, - num_workers=num_workers, - sampler=sampler, - drop_last=True) - if sampler is None: - dataloader_args['shuffle'] = True - dataloader_args['worker_init_fn'] = partial( - worker_init_fn, num_workers=num_workers, rank=rank, seed=seed) if seed is not None else None - elif phase in ['val', 'test']: # validation - dataloader_args = dict(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - else: - raise ValueError(f'Wrong dataset phase: {phase}. 
' "Supported ones are 'train', 'val' and 'test'.") - - dataloader_args['pin_memory'] = dataset_opt.get('pin_memory', False) - - prefetch_mode = dataset_opt.get('prefetch_mode') - if prefetch_mode == 'cpu': # CPUPrefetcher - num_prefetch_queue = dataset_opt.get('num_prefetch_queue', 1) - logger = get_root_logger() - logger.info(f'Use {prefetch_mode} prefetch dataloader: ' f'num_prefetch_queue = {num_prefetch_queue}') - return PrefetchDataLoader(num_prefetch_queue=num_prefetch_queue, **dataloader_args) - else: - # prefetch_mode=None: Normal dataloader - # prefetch_mode='cuda': dataloader for CUDAPrefetcher - return torch.utils.data.DataLoader(**dataloader_args) - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # Set the worker seed to num_workers * rank + worker_id + seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/maskformer/configuration_maskformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/maskformer/configuration_maskformer.py deleted file mode 100644 index baf907ee53c026724ff26e6f5a14d399e84c8c33..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/maskformer/configuration_maskformer.py +++ /dev/null @@ -1,201 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Meta Platforms, Inc.and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" MaskFormer model configuration""" -from typing import Dict, Optional - -from ...configuration_utils import PretrainedConfig -from ...utils import logging -from ..auto import CONFIG_MAPPING -from ..detr import DetrConfig -from ..swin import SwinConfig - - -MASKFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "facebook/maskformer-swin-base-ade": ( - "https://huggingface.co/facebook/maskformer-swin-base-ade/blob/main/config.json" - ) - # See all MaskFormer models at https://huggingface.co/models?filter=maskformer -} - -logger = logging.get_logger(__name__) - - -class MaskFormerConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`MaskFormerModel`]. It is used to instantiate a - MaskFormer model according to the specified arguments, defining the model architecture. Instantiating a - configuration with the defaults will yield a similar configuration to that of the MaskFormer - [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade) architecture trained - on [ADE20k-150](https://huggingface.co/datasets/scene_parse_150). - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Currently, MaskFormer only supports the [Swin Transformer](swin) as backbone. 
- - Args: - mask_feature_size (`int`, *optional*, defaults to 256): - The masks' features size, this value will also be used to specify the Feature Pyramid Network features' - size. - no_object_weight (`float`, *optional*, defaults to 0.1): - Weight to apply to the null (no object) class. - use_auxiliary_loss(`bool`, *optional*, defaults to `False`): - If `True` [`MaskFormerForInstanceSegmentationOutput`] will contain the auxiliary losses computed using the - logits from each decoder's stage. - backbone_config (`Dict`, *optional*): - The configuration passed to the backbone, if unset, the configuration corresponding to - `swin-base-patch4-window12-384` will be used. - decoder_config (`Dict`, *optional*): - The configuration passed to the transformer decoder model, if unset the base config for `detr-resnet-50` - will be used. - init_std (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - init_xavier_std (`float`, *optional*, defaults to 1): - The scaling factor used for the Xavier initialization gain in the HM Attention map module. - dice_weight (`float`, *optional*, defaults to 1.0): - The weight for the dice loss. - cross_entropy_weight (`float`, *optional*, defaults to 1.0): - The weight for the cross entropy loss. - mask_weight (`float`, *optional*, defaults to 20.0): - The weight for the mask loss. - output_auxiliary_logits (`bool`, *optional*): - Should the model output its `auxiliary_logits` or not. - - Raises: - `ValueError`: - Raised if the backbone model type selected is not in `["swin"]` or the decoder model type selected is not - in `["detr"]` - - Examples: - - ```python - >>> from transformers import MaskFormerConfig, MaskFormerModel - - >>> # Initializing a MaskFormer facebook/maskformer-swin-base-ade configuration - >>> configuration = MaskFormerConfig() - - >>> # Initializing a model (with random weights) from the facebook/maskformer-swin-base-ade style configuration - >>> model = MaskFormerModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ``` - - """ - model_type = "maskformer" - attribute_map = {"hidden_size": "mask_feature_size"} - backbones_supported = ["resnet", "swin"] - decoders_supported = ["detr"] - - def __init__( - self, - fpn_feature_size: int = 256, - mask_feature_size: int = 256, - no_object_weight: float = 0.1, - use_auxiliary_loss: bool = False, - backbone_config: Optional[Dict] = None, - decoder_config: Optional[Dict] = None, - init_std: float = 0.02, - init_xavier_std: float = 1.0, - dice_weight: float = 1.0, - cross_entropy_weight: float = 1.0, - mask_weight: float = 20.0, - output_auxiliary_logits: Optional[bool] = None, - **kwargs, - ): - if backbone_config is None: - # fall back to https://huggingface.co/microsoft/swin-base-patch4-window12-384-in22k - backbone_config = SwinConfig( - image_size=384, - in_channels=3, - patch_size=4, - embed_dim=128, - depths=[2, 2, 18, 2], - num_heads=[4, 8, 16, 32], - window_size=12, - drop_path_rate=0.3, - out_features=["stage1", "stage2", "stage3", "stage4"], - ) - - if isinstance(backbone_config, dict): - backbone_model_type = backbone_config.pop("model_type") - config_class = CONFIG_MAPPING[backbone_model_type] - backbone_config = config_class.from_dict(backbone_config) - - # verify that the backbone is supported - if backbone_config.model_type not in self.backbones_supported: - logger.warning_once( - f"Backbone {backbone_config.model_type} is not a supported model and may not 
be compatible with MaskFormer. " - f"Supported model types: {','.join(self.backbones_supported)}" - ) - - if decoder_config is None: - # fall back to https://huggingface.co/facebook/detr-resnet-50 - decoder_config = DetrConfig() - else: - # verify that the decoder is supported - decoder_type = ( - decoder_config.pop("model_type") if isinstance(decoder_config, dict) else decoder_config.model_type - ) - if decoder_type not in self.decoders_supported: - raise ValueError( - f"Transformer Decoder {decoder_type} not supported, please use one of" - f" {','.join(self.decoders_supported)}" - ) - if isinstance(decoder_config, dict): - config_class = CONFIG_MAPPING[decoder_type] - decoder_config = config_class.from_dict(decoder_config) - - self.backbone_config = backbone_config - self.decoder_config = decoder_config - # main feature dimension for the model - self.fpn_feature_size = fpn_feature_size - self.mask_feature_size = mask_feature_size - # initializer - self.init_std = init_std - self.init_xavier_std = init_xavier_std - # Hungarian matcher && loss - self.cross_entropy_weight = cross_entropy_weight - self.dice_weight = dice_weight - self.mask_weight = mask_weight - self.use_auxiliary_loss = use_auxiliary_loss - self.no_object_weight = no_object_weight - self.output_auxiliary_logits = output_auxiliary_logits - - self.num_attention_heads = self.decoder_config.encoder_attention_heads - self.num_hidden_layers = self.decoder_config.num_hidden_layers - super().__init__(**kwargs) - - @classmethod - def from_backbone_and_decoder_configs( - cls, backbone_config: PretrainedConfig, decoder_config: PretrainedConfig, **kwargs - ): - """Instantiate a [`MaskFormerConfig`] (or a derived class) from a pre-trained backbone model configuration and DETR model - configuration. - - Args: - backbone_config ([`PretrainedConfig`]): - The backbone configuration. - decoder_config ([`PretrainedConfig`]): - The transformer decoder configuration to use. - - Returns: - [`MaskFormerConfig`]: An instance of a configuration object - """ - return cls( - backbone_config=backbone_config, - decoder_config=decoder_config, - **kwargs, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/phobert/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/phobert/__init__.py deleted file mode 100644 index c974d994eca0322462ec7d97ce96728c9cb4ba24..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/phobert/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
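-# Note on the structure below: tokenization_phobert is only imported when PhobertTokenizer is first
-# accessed at runtime (via _LazyModule); the TYPE_CHECKING branch gives static type checkers the real import.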
- -from typing import TYPE_CHECKING - -from ...utils import _LazyModule - - -_import_structure = {"tokenization_phobert": ["PhobertTokenizer"]} - - -if TYPE_CHECKING: - from .tokenization_phobert import PhobertTokenizer - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yuszeying/textgenerator/app.py b/spaces/yuszeying/textgenerator/app.py deleted file mode 100644 index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000 --- a/spaces/yuszeying/textgenerator/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/gpt2").launch() \ No newline at end of file diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/utils.py b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/utils.py deleted file mode 100644 index 326a6ef8c231dc5fe6b90c3efc44c86247a5f2d1..0000000000000000000000000000000000000000 --- a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/utils.py +++ /dev/null @@ -1,543 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import warnings -import random -import functools - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) -# return f0_norm - -def deprecated(func): - """This is a decorator which can be used to mark functions - as deprecated. 
It will result in a warning being emitted - when the function is used.""" - @functools.wraps(func) - def new_func(*args, **kwargs): - warnings.simplefilter('always', DeprecationWarning) # turn off filter - warnings.warn("Call to deprecated function {}.".format(func.__name__), - category=DeprecationWarning, - stacklevel=2) - warnings.simplefilter('default', DeprecationWarning) # reset filter - return func(*args, **kwargs) - return new_func - -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - -def compute_f0_uv_torchcrepe(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512,device=None,cr_threshold=0.05): - from modules.crepe import CrepePitchExtractor - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - - f0_min = 50 - f0_max = 1100 - F0Creper = CrepePitchExtractor(hop_length=hop_length,f0_min=f0_min,f0_max=f0_max,device=device,threshold=cr_threshold) - f0,uv = F0Creper(x[None,:].float(),sampling_rate,pad_to=p_len) - return f0,uv - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # this may not be necessary - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = 
np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).int() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = "hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - 
checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if 
info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -def mix_model(model_paths,mix_rate,mode): - mix_rate = torch.FloatTensor(mix_rate)/100 - model_tem = torch.load(model_paths[0]) - models = [torch.load(path)["model"] for path in model_paths] - if mode == 0: - mix_rate = F.softmax(mix_rate,dim=0) - for k in model_tem["model"].keys(): - model_tem["model"][k] = torch.zeros_like(model_tem["model"][k]) - for i,model in enumerate(models): - model_tem["model"][k] += model[k]*mix_rate[i] - torch.save(model_tem,os.path.join(os.path.curdir,"output.pth")) - return os.path.join(os.path.curdir,"output.pth") - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/zhan66/vits-simple-api/vits/bert/ProsodyModel.py b/spaces/zhan66/vits-simple-api/vits/bert/ProsodyModel.py deleted file mode 100644 index 5f305b41894a4a8cec05c23dcdd29a9b939b748b..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-simple-api/vits/bert/ProsodyModel.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import torch -import torch.nn as nn -import torch.nn.functional as F - -from transformers import BertModel, BertConfig, BertTokenizer - - -class CharEmbedding(nn.Module): - def __init__(self, model_dir): - super().__init__() - self.tokenizer = BertTokenizer.from_pretrained(model_dir) - self.bert_config = BertConfig.from_pretrained(model_dir) - self.hidden_size = self.bert_config.hidden_size - self.bert = BertModel(self.bert_config) - self.proj = nn.Linear(self.hidden_size, 256) - self.linear = nn.Linear(256, 3) - - def text2Token(self, text): - token = self.tokenizer.tokenize(text) - txtid = self.tokenizer.convert_tokens_to_ids(token) - return txtid - - def forward(self, inputs_ids, inputs_masks, tokens_type_ids): - out_seq = self.bert(input_ids=inputs_ids, - attention_mask=inputs_masks, - token_type_ids=tokens_type_ids)[0] - out_seq = self.proj(out_seq) - return out_seq - - -class TTSProsody(object): - def __init__(self, path, device): - self.device = device - self.char_model = CharEmbedding(path) - 
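-        # load the pretrained prosody weights (map_location='cpu' keeps loading device-agnostic);
-        # strict=False ignores checkpoint keys that don't match CharEmbedding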
self.char_model.load_state_dict( - torch.load( - os.path.join(path, 'prosody_model.pt'), - map_location="cpu" - ), - strict=False - ) - self.char_model.eval() - self.char_model.to(self.device) - - def get_char_embeds(self, text): - input_ids = self.char_model.text2Token(text) - input_masks = [1] * len(input_ids) - type_ids = [0] * len(input_ids) - input_ids = torch.LongTensor([input_ids]).to(self.device) - input_masks = torch.LongTensor([input_masks]).to(self.device) - type_ids = torch.LongTensor([type_ids]).to(self.device) - - with torch.no_grad(): - char_embeds = self.char_model( - input_ids, input_masks, type_ids).squeeze(0).cpu() - return char_embeds - - def expand_for_phone(self, char_embeds, length): # length of phones for char - assert char_embeds.size(0) == len(length) - expand_vecs = list() - for vec, leng in zip(char_embeds, length): - vec = vec.expand(leng, -1) - expand_vecs.append(vec) - expand_embeds = torch.cat(expand_vecs, 0) - assert expand_embeds.size(0) == sum(length) - return expand_embeds.numpy() - - -if __name__ == "__main__": - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - prosody = TTSProsody('./bert/', device) - while True: - text = input("请输入文本:") - prosody.get_char_embeds(text) diff --git a/spaces/zhanghaohui/szu-gpt-academic/Dockerfile b/spaces/zhanghaohui/szu-gpt-academic/Dockerfile deleted file mode 100644 index 97ad13d964d051e4bfdd255a668c209120b1ada4..0000000000000000000000000000000000000000 --- a/spaces/zhanghaohui/szu-gpt-academic/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM -# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic . -# 如何运行: docker run --rm -it --net=host gpt-academic -FROM python:3.11 - -RUN echo '[global]' > /etc/pip.conf && \ - echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \ - echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf - - -WORKDIR /gpt - - - - -# 安装依赖 -COPY requirements.txt ./ -COPY ./docs/gradio-3.32.2-py3-none-any.whl ./docs/gradio-3.32.2-py3-none-any.whl -RUN pip3 install -r requirements.txt -# 装载项目文件 -COPY . . -RUN pip3 install -r requirements.txt - -# 可选步骤,用于预热模块 -RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()' - -# 启动 -CMD ["python3", "-u", "main.py"] diff --git a/spaces/zhanghaohui/szu-gpt-academic/request_llm/README.md b/spaces/zhanghaohui/szu-gpt-academic/request_llm/README.md deleted file mode 100644 index 545bc1ffba8b79a49d994cfedcc2a787475181b2..0000000000000000000000000000000000000000 --- a/spaces/zhanghaohui/szu-gpt-academic/request_llm/README.md +++ /dev/null @@ -1,79 +0,0 @@ -# 如何使用其他大语言模型 - -## ChatGLM - -- 安装依赖 `pip install -r request_llm/requirements_chatglm.txt` -- 修改配置,在config.py中将LLM_MODEL的值改为"chatglm" - -``` sh -LLM_MODEL = "chatglm" -``` -- 运行! -``` sh -`python main.py` -``` - -## Claude-Stack - -- 请参考此教程获取 https://zhuanlan.zhihu.com/p/627485689 - - 1、SLACK_CLAUDE_BOT_ID - - 2、SLACK_CLAUDE_USER_TOKEN - -- 把token加入config.py - -## Newbing - -- 使用cookie editor获取cookie(json) -- 把cookie(json)加入config.py (NEWBING_COOKIES) - -## Moss -- 使用docker-compose - -## RWKV -- 使用docker-compose - -## LLAMA -- 使用docker-compose - -## 盘古 -- 使用docker-compose - - ---- -## Text-Generation-UI (TGUI,调试中,暂不可用) - -### 1. 
部署TGUI -``` sh -# 1 下载模型 -git clone https://github.com/oobabooga/text-generation-webui.git -# 2 这个仓库的最新代码有问题,回滚到几周之前 -git reset --hard fcda3f87767e642d1c0411776e549e1d3894843d -# 3 切换路径 -cd text-generation-webui -# 4 安装text-generation的额外依赖 -pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers -# 5 下载模型 -python download-model.py facebook/galactica-1.3b -# 其他可选如 facebook/opt-1.3b -# facebook/galactica-1.3b -# facebook/galactica-6.7b -# facebook/galactica-120b -# facebook/pygmalion-1.3b 等 -# 详情见 https://github.com/oobabooga/text-generation-webui - -# 6 启动text-generation -python server.py --cpu --listen --listen-port 7865 --model facebook_galactica-1.3b -``` - -### 2. 修改config.py - -``` sh -# LLM_MODEL格式: tgui:[模型]@[ws地址]:[ws端口] , 端口要和上面给定的端口一致 -LLM_MODEL = "tgui:galactica-1.3b@localhost:7860" -``` - -### 3. 运行! -``` sh -cd chatgpt-academic -python main.py -``` diff --git a/spaces/zhangyd/bingo/src/components/turn-counter.tsx b/spaces/zhangyd/bingo/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
-      <>
-        {throttling.numUserMessagesInConversation} / {throttling.maxNumUserMessagesInConversation}
-      </>
        - ) -} diff --git a/spaces/zhoupin30/zhoupin30/src/state/index.ts b/spaces/zhoupin30/zhoupin30/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 
全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' 
-] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/zideliu/styledrop/timm/scheduler/plateau_lr.py b/spaces/zideliu/styledrop/timm/scheduler/plateau_lr.py deleted file mode 100644 index 4f2cacb65a1bf23d10aa6fd296f74579571043cf..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/scheduler/plateau_lr.py +++ /dev/null @@ -1,113 +0,0 @@ -""" Plateau Scheduler - -Adapts PyTorch plateau scheduler and allows application of noise, warmup. - -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch - -from .scheduler import Scheduler - - -class PlateauLRScheduler(Scheduler): - """Decay the LR by a factor every time the validation loss plateaus.""" - - def __init__(self, - optimizer, - decay_rate=0.1, - patience_t=10, - verbose=True, - threshold=1e-4, - cooldown_t=0, - warmup_t=0, - warmup_lr_init=0, - lr_min=0, - mode='max', - noise_range_t=None, - noise_type='normal', - noise_pct=0.67, - noise_std=1.0, - noise_seed=None, - initialize=True, - ): - super().__init__(optimizer, 'lr', initialize=initialize) - - self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( - self.optimizer, - patience=patience_t, - factor=decay_rate, - verbose=verbose, - threshold=threshold, - cooldown=cooldown_t, - mode=mode, - min_lr=lr_min - ) - - self.noise_range = noise_range_t - self.noise_pct = noise_pct - self.noise_type = noise_type - self.noise_std = noise_std - self.noise_seed = noise_seed if noise_seed is not None else 42 - self.warmup_t = warmup_t - self.warmup_lr_init = warmup_lr_init - if self.warmup_t: - self.warmup_steps = [(v - warmup_lr_init) / self.warmup_t for v in self.base_values] - super().update_groups(self.warmup_lr_init) - else: - self.warmup_steps = [1 for _ in self.base_values] - self.restore_lr = None - - def state_dict(self): - return { - 'best': self.lr_scheduler.best, - 'last_epoch': self.lr_scheduler.last_epoch, - } - - def load_state_dict(self, state_dict): - self.lr_scheduler.best = state_dict['best'] - if 'last_epoch' in state_dict: - self.lr_scheduler.last_epoch = state_dict['last_epoch'] - - # override the base class step fn completely - def step(self, epoch, metric=None): - if epoch <= self.warmup_t: - lrs = [self.warmup_lr_init + epoch * s for s in self.warmup_steps] - super().update_groups(lrs) - else: - if self.restore_lr is not None: - # restore actual LR from before our last noise perturbation before stepping base - for i, param_group in enumerate(self.optimizer.param_groups): - param_group['lr'] = self.restore_lr[i] - self.restore_lr = None - - self.lr_scheduler.step(metric, epoch) # step the base scheduler - - if self.noise_range is not None: - if 
isinstance(self.noise_range, (list, tuple)): - apply_noise = self.noise_range[0] <= epoch < self.noise_range[1] - else: - apply_noise = epoch >= self.noise_range - if apply_noise: - self._apply_noise(epoch) - - def _apply_noise(self, epoch): - g = torch.Generator() - g.manual_seed(self.noise_seed + epoch) - if self.noise_type == 'normal': - while True: - # resample if noise out of percent limit, brute force but shouldn't spin much - noise = torch.randn(1, generator=g).item() - if abs(noise) < self.noise_pct: - break - else: - noise = 2 * (torch.rand(1, generator=g).item() - 0.5) * self.noise_pct - - # apply the noise on top of previous LR, cache the old value so we can restore for normal - # stepping of base scheduler - restore_lr = [] - for i, param_group in enumerate(self.optimizer.param_groups): - old_lr = float(param_group['lr']) - restore_lr.append(old_lr) - new_lr = old_lr + old_lr * noise - param_group['lr'] = new_lr - self.restore_lr = restore_lr
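To make the behaviour of this scheduler concrete, here is a minimal, hedged usage sketch; the model, optimizer, hyperparameter values and metric sequence are stand-ins, and the import path simply mirrors the file's location in timm:

```python
# Minimal sketch: driving PlateauLRScheduler from a training loop.
# Everything except the scheduler's own API is a placeholder.
import torch
from timm.scheduler.plateau_lr import PlateauLRScheduler

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

scheduler = PlateauLRScheduler(
    optimizer,
    decay_rate=0.5,       # halve the LR when the metric plateaus
    patience_t=2,         # epochs without improvement before decaying
    warmup_t=3,           # linear warmup over the first 3 epochs
    warmup_lr_init=1e-4,
    mode='max',           # treat the metric as "higher is better"
)

fake_metrics = [0.1, 0.2, 0.3, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.4]  # stand-in validation scores
for epoch, metric in enumerate(fake_metrics):
    # ... one epoch of training and validation would run here ...
    scheduler.step(epoch, metric=metric)  # warmup while epoch <= warmup_t, plateau logic afterwards
```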