diff --git a/spaces/101-5/gpt4free/g4f/.v1/unfinished/openprompt/README.md b/spaces/101-5/gpt4free/g4f/.v1/unfinished/openprompt/README.md
deleted file mode 100644
index 489d054aeb56c15d30d3cda1e8ef350c7ff3167a..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/unfinished/openprompt/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-https://openprompt.co/
-
-to do:
-- finish integrating email client
-- code refactoring
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bum Simulator Download For Pcl A Parody of Simulation Games with a Twist.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bum Simulator Download For Pcl A Parody of Simulation Games with a Twist.md
deleted file mode 100644
index 0d8f5f941f5ad6b9528b7f7a47903f227052c8bf..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bum Simulator Download For Pcl A Parody of Simulation Games with a Twist.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
Bum Simulator Download For PC: A Guide to Living on the Streets
-
Have you ever wondered what it's like to be a bum? To live on the streets, beg for money, fight for survival, and deal with the harsh realities of urban life? If you have, then you might be interested in Bum Simulator, a sandbox game that lets you experience the life of a homeless person in a humorous and absurd way. In this article, we will tell you everything you need to know about Bum Simulator, how to download it for PC, and why you should play it.
-
What is Bum Simulator?
-
Bum Simulator is a sandbox game developed by Ragged Games that was released in 2023. It is a game that combines elements of adventure, survival, simulation, and comedy. You play as a bum who lives on the streets of Bumsville, a fictional city inspired by New York. You can explore the city, interact with other characters, complete quests, collect items, craft weapons, build your own cardboard house, tame pigeons, and more. You can also choose how to shape your fate: you can either accept your situation and live like a bum, find a job and try to get back on your feet, or seek revenge on those who ruined your life. The game offers endless possibilities and outcomes for your bum's story.
A sandbox game with inappropriate humor and memorable characters
-
One of the main features of Bum Simulator is its open-world sandbox gameplay. You can go wherever you want and do whatever you like in the city. You can explore dirty alleys, busy streets, pawnshops, central park, underground passages, and more. You can also interact with many unusual characters with their own storylines and quests. Some of them are friendly and helpful, while others are hostile and dangerous. You can also encounter various events and situations that will test your skills and luck. For example, you can get chased by the police, attacked by gangs, kidnapped by mole people, or invited to a party by aliens. The game is full of inappropriate humor and jokes that will make you laugh or cringe.
-
A survival game with freedom and choices
-
Another feature of Bum Simulator is its survival aspect. You have to manage your basic needs such as hunger, thirst, health, hygiene, energy, and happiness. You have to find food and water sources, scavenge for useful items, craft tools and weapons, build shelters and traps, fight enemies and predators, avoid dangers and diseases, and more. You also have to deal with the consequences of your actions. For example, if you steal from someone or cause trouble in public, you will attract attention from the authorities or other bums. If you help someone or do a good deed, you will earn respect or gratitude from them. You also have to make choices that will affect your bum's personality and reputation. For example, you can be kind or cruel, honest or deceitful, generous or greedy, etc. Your choices will also affect the endings of the game.
-
A game with achievements, secrets and pigeons
-
The last feature of Bum Simulator is its variety of content and challenges. The game has many achievements to unlock and secrets to discover. You can find hidden items, easter eggs, references to pop culture or other games, and more. You can also complete mini-games, challenges, and quests that will reward you with money, items, or skills. One of the most unique aspects of the game is its pigeon system. You can tame pigeons, train them, and use them as your allies or weapons. You can also learn the secrets of alcohol alchemy, a mysterious art that allows you to create powerful potions from booze.
-
How to download Bum Simulator for free on PC
-Bum Simulator PC game full version download
-Bum Simulator crack download for Windows 10
-Bum Simulator torrent download link for PC
-Bum Simulator gameplay and review for PC
-Bum Simulator system requirements and compatibility for PC
-Bum Simulator mods and cheats for PC
-Bum Simulator update and patch download for PC
-Bum Simulator steam key giveaway for PC
-Bum Simulator best settings and tips for PC
-Bum Simulator download size and installation guide for PC
-Bum Simulator free demo download for PC
-Bum Simulator online multiplayer mode for PC
-Bum Simulator DLC and expansion pack download for PC
-Bum Simulator controller support and configuration for PC
-Bum Simulator alternatives and similar games for PC
-Bum Simulator release date and price for PC
-Bum Simulator official trailer and screenshots for PC
-Bum Simulator developer and publisher information for PC
-Bum Simulator minimum and recommended specs for PC
-Bum Simulator error fix and troubleshooting guide for PC
-Bum Simulator steam charts and achievements for PC
-Bum Simulator save file location and backup for PC
-Bum Simulator keyboard and mouse controls for PC
-Bum Simulator VR support and compatibility for PC
-Bum Simulator soundtrack and music download for PC
-Bum Simulator custom maps and levels for PC
-Bum Simulator steam workshop and community hub for PC
-Bum Simulator rating and reviews for PC
-Bum Simulator co-op and split-screen mode for PC
-Bum Simulator direct download link for PC
-Bum Simulator skidrow reloaded download for PC
-Bum Simulator fitgirl repack download for PC
-Bum Simulator ocean of games download for PC
-Bum Simulator igg games download for PC
-Bum Simulator apunkagames download for PC
-Bum Simulator cpy games download for PC
-Bum Simulator codex games download for PC
-Bum Simulator plaza games download for PC
-Bum Simulator rg mechanics download for PC
-Bum Simulator pcgames88 download for PC
-Bum Simulator gametrex download for PC
-Bum Simulator worldofpcgames download for PC
-Bum Simulator pcgamestorrents download for PC
-Bum Simulator thepcgames download for PC
-Bum Simulator fullypcgames download for PC
-Bum Simulator oldgamesdownload download for PC
-Bum Simulator freegogpcgames download for PC
-Bum Simulator gog-games.com download for PC
-
How to download Bum Simulator for PC?
-
If you are interested in playing Bum Simulator on your PC, you will need to meet some requirements and follow some steps. Here are the details:
-
Requirements and specifications
-
To run Bum Simulator on your PC, you will need to have a Windows 8.1 or higher operating system, a 64-bit processor, at least 8 GB of RAM, and at least 20 GB of available space on your hard drive. You will also need a graphics card that supports DirectX 11 and has at least 2 GB of VRAM. The recommended graphics card is NVIDIA GeForce GTX 1060 or AMD Radeon RX 580.
-
Steps to download and install Bum Simulator
-
-
The first step is to buy Bum Simulator from an online platform such as Steam or GOG.com. You can also find other websites that offer the game for download, but make sure they are trustworthy and virus-free.
-
The second step is to download the game installer from the platform you chose. You will need an internet connection and enough space on your hard drive to download the game files.
-
The third step is to run the installer and follow the instructions on the screen. You will need to agree to the terms and conditions and choose a destination folder for the game.
-
The fourth step is to wait for the installation process to finish. It may take some time depending on your internet speed and computer performance.
-
The fifth step is to launch the game from your desktop shortcut or from the platform you bought it from. You may need to create an account or log in with an existing one to access the game.
-
The sixth step is to enjoy playing Bum Simulator on your PC!
-
-
Tips and tricks for playing Bum Simulator
-
-
Explore every corner of the city and look for useful items, hidden secrets, and interesting characters.
-
Manage your needs carefully and don't let them drop too low. You can find food and water in trash cans, shops, or vending machines. You can also hunt animals or fish in ponds.
-
Craft weapons and tools from items you find or buy. You can make knives, hammers, bombs, bows, etc. You can also upgrade them with better materials or skills.
-
Build your own cardboard house and decorate it with furniture, paintings, or posters. You can also invite other bums or pigeons to live with you.
-
Tame pigeons and use them as your companions or weapons. You can feed them, pet them, name them, and teach them tricks. You can also weaponize them with bombs, lasers, or hats.
-
Learn alcohol alchemy and create potions from booze. You can make potions that heal you, boost your stats, give you special abilities, or cause hilarious effects.
-
Complete quests and challenges for other characters or yourself. You can earn money, items, skills, or reputation from them.
-
Choose your path and shape your fate. You can either accept your situation and live like a bum, find a job and try to get back on your feet, or seek revenge on those who ruined your life. Your choices will affect the endings of the game.
-
Have fun and don't take the game too seriously. It's a silly game full of absurd humor and jokes.
-
-
Why should you play Bum Simulator?
-
Bum Simulator is a game that offers a lot of fun and entertainment for anyone who likes sandbox games, survival games, simulation games, or comedy games. It's a game that lets you experience the life of a homeless person in a humorous and absurd way. It's a game that gives you freedom and choices to shape your fate. It's a game that has achievements, secrets, and pigeons to keep you engaged. Here are some reasons why you should play Bum Simulator:
It's fun and absurd
-
Bum Simulator is a game that doesn't take itself too seriously. It's a game that makes fun of the stereotypes and clichés of being a bum. It's a game that has ridiculous situations and events that will make you laugh or cringe. For example, you can get chased by the police, attacked by gangs, kidnapped by mole people, or invited to a party by aliens. You can also interact with many funny characters with their own quirks and personalities. You can also create your own fun and absurd scenarios with the sandbox gameplay. You can do whatever you want and see what happens.
-
It's challenging and rewarding
-
Bum Simulator is also a game that tests your skills and luck. It's a game that has survival elements that require you to manage your basic needs and resources. It's a game that has enemies and dangers that threaten your life and well-being. It's a game that has quests and challenges that demand your attention and effort. It's a game that has consequences and outcomes that depend on your actions and choices. But it's also a game that rewards you for your achievements and discoveries. It's a game that gives you money, items, skills, or reputation for completing tasks or finding secrets. It's a game that gives you satisfaction and pride for overcoming obstacles or reaching goals.
-
It's immersive and interactive
-
Bum Simulator is also a game that immerses you in its world and story. It's a game that has a detailed and realistic city environment that you can explore and interact with. It's a game that has a dynamic day-night cycle and weather system that affect the gameplay and atmosphere. It's a game that has a rich and diverse soundtrack that matches the mood and tone of the game. It's also a game that has a compelling and branching story that you can influence with your choices. It's a game that has multiple endings that reflect your personality and reputation.
-
Conclusion
-
Bum Simulator is a sandbox game that lets you experience the life of a homeless person in a humorous and absurd way. You can explore the city, interact with other characters, complete quests, collect items, craft weapons, build your own cardboard house, tame pigeons, and more. You can also choose how to shape your fate: you can either accept your situation and live like a bum, find a job and try to get back on your feet, or seek revenge on those who ruined your life. The game offers endless possibilities and outcomes for your bum's story.
-
If you are looking for a fun and entertaining game that combines elements of adventure, survival, simulation, and comedy, then you should try Bum Simulator. You can download it for PC from various online platforms such as Steam or GOG.com. You will need to meet some requirements and follow some steps to install it on your PC. You will also need some tips and tricks to play it well.
-
Bum Simulator is a game that will make you laugh, cry, cringe, or cheer. It's a game that will challenge you, reward you, immerse you, or surprise you. It's a game that will give you freedom, choices, achievements, secrets, or pigeons. It's a game that will let you live on the streets as a bum.
-
FAQs
-
-
Q: Is Bum Simulator based on real life?
-
A: No, Bum Simulator is not based on real life. It is a fictional game that exaggerates and parodies the stereotypes and clichés of being a bum. It is not meant to offend or mock anyone who is homeless or struggling in life.
-
Q: Is Bum Simulator multiplayer?
-
A: No, Bum Simulator is not multiplayer. It is a single-player game that focuses on your bum's story and choices.
-
Q: Is Bum Simulator suitable for children?
-
A: No, Bum Simulator is not suitable for children. It is a mature game that contains violence, blood, gore, nudity, sexual content, drugs, alcohol, profanity, and crude humor.
-
Q: How long is Bum Simulator?
-
A: The length of Bum Simulator depends on how you play it. You can finish the main story in about 10 hours if you focus on the main quests. You can also spend more time exploring the city, doing side quests, collecting items, crafting weapons, building your house, taming pigeons, etc.
-
Q: How many endings does Bum Simulator have?
-
A: Bum Simulator has multiple endings that depend on your choices throughout the game. Your choices will affect your bum's personality, reputation, and fate. You can either become a happy bum, a successful bum, a vengeful bum, or something else.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/3d Sex Villa 2 Full For Android Apk.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/3d Sex Villa 2 Full For Android Apk.rar.md
deleted file mode 100644
index 71fcc9400d2a6a46f7863a7ddd7968a97df5e434..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/3d Sex Villa 2 Full For Android Apk.rar.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-Sexvilla2 3D Sexvilla 2 part 1, 3D SexVilla 2 Everlust 1 torrent. Some people have been trying to get people to play Pokémon Go at a recent visit by the local police.
-
-The police found many people playing the game while they were patrolling on the streets.
-
-At the moment, people who find themselves playing the game while on the street can be reported to the police, and warned they are doing a criminal act.
-
-Playing Pokémon Go on the streets of Loch Lomond has been banned by police in Scotland.
-
-This comes after new rules were introduced to police forces across the UK banning the use of smartphones while out on patrol.
-
-People have been warned that playing the game while out on the street could put their lives at risk.
-
-Currently, police forces across the UK are trying to create more visible patrols, and more officers on the streets to look for trouble.
-
-Glasgow police already patrol in pairs, while a new rule has been introduced across Scotland banning playing Pokémon Go on the street while out on patrol.
-
-Detective Inspector Stuart Reid said: “Our aim is to make sure people are safe when out on the streets.
-
-“We would like to remind everyone that Pokémon Go has not been endorsed by the police, and we would encourage all those who find themselves playing to ensure they remain safe.
-
-“We encourage people to play Pokémon Go in public areas, but remind them that it is still a criminal offence to play in public.”
-
-This story was originally published in the Daily Record. Read the original here.Religion
-
-Antonio di Pietro is an artist who draws a lot with the art of lettering. His latest project is, “The sea which has no beginning, no end”. He received thousands of messages from people who didn’t understand why he had to use the word Sea for such a concept. Here is a new artwork which answers the question:
-
-About us
-
-Hearst Mantis is an art and design platform for the latest trends in music, fashion, art, food, travel and more. We provide cutting-edge content from a variety of industry leading creators.Q:
-
-Unexpected nil when setting a global variable
-
-I have a Python class that I'm using to transfer data between the server and the client.
-
-When the client disconnects, I want to set a 4fefd39f24
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Experience The Trench Warfare Of Verdun 1914-1918 On Your Mac!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Experience The Trench Warfare Of Verdun 1914-1918 On Your Mac!.md
deleted file mode 100644
index e0ee125fa121e876161ccb21e3b245b963cc800a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Experience The Trench Warfare Of Verdun 1914-1918 On Your Mac!.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
On the afternoon of the 4th, the last pigeon was released. On the morning of the 5th, thanks to two signalmen who volunteered to change a signal post which the Commandant had difficulty in observing, communications were maintained.
Verdun is a squad-based multiplayer first-person shooter set during World War I. It was released on 28 April 2015 on Steam after more than a year in Early Access. The game features realistic trench warfare and offers players an immersive experience as they battle it out against other squads. Verdun also has a unique system where players can choose to fight for one of four different armies, each with its own strengths and weaknesses. If you're looking for an intense and strategic WWI FPS, then Verdun is definitely worth checking out.
-
Tannenberg is a squad-based multiplayer first-person shooter video game set during World War I. It is a standalone expansion to Verdun, and entered Steam Early Access on November 17, 2017.[1][2][3][4] Tannenberg left Steam Early Access on February 13, 2019.[5][6] It was released on PlayStation 4 and Xbox One on July 24, 2020.[7][8][9][10]
-
1914-1918 series Starting out on the Western Front with the release of the first realistic WW1 FPS Verdun back in April 2015, and expanding to the Eastern Front with the upcoming Tannenberg, the 1914-1918 series throws players into intense warfare inspired by the chaos and fury of iconic battles from the First World War. With over 900,000 copies of Verdun sold, this novel and underserved setting has proven popular with the gaming community! Players choose from a variety of historically accurate squads and weapons, with more available to unlock through playing the game, before diving into the mud and blood splattered battlefields of dynamic multiplayer trench warfare. Every game is built on a base of thorough research and receives extensive post-release support bringing new content and challenges for our players. The games in the series are linked, but each one is standalone and provides a different experience, reflecting the nature of the fighting in the many-sided theaters of the war.
-
Verdun is a first-person shooter video game set during the First World War (1914-1918). It was developed by the Dutch studios M2H and Blackmill Games. Verdun appeared as a beta version on 9 June 2013 and was officially launched on 28 April 2015 on the Steam software platform.[1] The game is available for Windows, Mac, and Linux.
-
-
Less than a year ago, Verdun was released as a Steam Early Access game. The successful collaboration of M2H and BlackMill Games has finally reached a point where the developers are happy to release the game in all its glory. Many features and much content had already been added shortly after our preview, which you can read here, and even more has been added since.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FULL DanDans (Easy) Audio Editor V9.0 The Ultimate Guide to Visual Music Editing.md b/spaces/1gistliPinn/ChatGPT4/Examples/FULL DanDans (Easy) Audio Editor V9.0 The Ultimate Guide to Visual Music Editing.md
deleted file mode 100644
index 7cd6f08bad1418aba3eccd67e7626ede3159c2b2..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/FULL DanDans (Easy) Audio Editor V9.0 The Ultimate Guide to Visual Music Editing.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Still struggling because the 4K video you shot won't play on other devices, or plays with the audio and video out of sync? Wondershare UniConverter helps you out! Find your issue and get the full answer now.
Easy Audio Editor is a visual, multifunctional audio file editor which allows you to perform various operations on audio data, such as visual editing, creating, recording, converting, and playing audio files. It supports all audio and video formats.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download APK Magic COC S1 Versi Terbaru and Enjoy the Best Private Server for Clash of Clans.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download APK Magic COC S1 Versi Terbaru and Enjoy the Best Private Server for Clash of Clans.md
deleted file mode 100644
index 30c5dd4a6eb6275035e238b34a19815e0301a905..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download APK Magic COC S1 Versi Terbaru and Enjoy the Best Private Server for Clash of Clans.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Download APK Magic COC S1 Versi Terbaru: A Private Server for Clash of Clans
-
If you are a fan of Clash of Clans, you might have heard of APK Magic COC S1, a private server that lets you play the game with unlimited resources and custom mods. In this article, we will tell you everything you need to know about this app, including its features, how to download and install it, and its pros and cons. Read on to find out why you should download APK Magic COC S1 versi terbaru and enjoy the game like never before.
APK Magic COC S1 is a modified version of Clash of Clans that runs on a private server. This means that you can play the game with some extra features that are not available in the official version. Here are some of the main features of APK Magic COC S1:
-
-
Unlimited resources: You can get unlimited gems, gold, elixir, and dark elixir to build your base, train your troops, upgrade your buildings, and research new technologies. You don't have to worry about running out of resources or spending real money to buy them.
-
Custom mods: You can customize your game with various mods that allow you to create your own buildings, troops, heroes, and spells. You can also change the appearance and behavior of the existing ones. For example, you can make your archers shoot fireballs, your barbarians fly, or your pekkas invisible.
-
Fast and stable servers: You can play the game smoothly without any lag or crash. The servers of APK Magic COC S1 are fast and reliable, and they can handle thousands of players at the same time. You also don't have to worry about getting banned by Supercell, as they cannot detect or access your private server.
-
Real-time PvP battles: You can challenge other players online in real-time battles. You can test your skills and strategies against other players who are using the same private server as you. You can also join clans and participate in clan wars with your friends.
-
-
How to Download and Install APK Magic COC S1
-
If you want to download and install APK Magic COC S1 on your Android device, you need to follow these simple steps:
-
-
Enable unknown sources on your device: To install apps from sources other than Google Play Store, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Download the APK file from a trusted source: You need to download the APK file of APK Magic COC S1 from a trusted source. You can use the link to download the latest version of the app. Make sure you have enough storage space on your device before downloading.
-
Install the APK file and launch the app: After downloading the APK file, you need to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for the process to finish. Once the app is installed, you can launch it by tapping on its icon on your home screen or app drawer.
-
-
Pros and Cons of APK Magic COC S1
-
APK Magic COC S1 is a great app for Clash of Clans lovers who want to have more fun and freedom in the game. However, it also has some drawbacks that you should be aware of before downloading it. Here are some of the pros and cons of APK Magic COC S1:
-
-
-
Pros
-
Cons
-
-
-
- More fun: You can enjoy the game without any limitations or restrictions. You can build your base, train your troops, and attack other players as much as you want.
-
- Not official: APK Magic COC S1 is not an official app from Supercell, the developer of Clash of Clans. It is a third-party app that is not endorsed or supported by Supercell.
-
-
-
- More freedom: You can customize your game with various mods that allow you to create your own buildings, troops, heroes, and spells. You can also change the appearance and behavior of the existing ones.
-
- Not compatible: APK Magic COC S1 is not compatible with the official version of Clash of Clans. You cannot play with or against players who are using the official version. You also cannot sync your progress or data with your Google Play account.
-
-
-
- More options: You can choose from different servers that offer different features and settings. You can also switch between servers easily without losing your data.
-
- Not updated: APK Magic COC S1 is not updated regularly with the latest features and updates from Clash of Clans. You may miss out on some new content or events that are available in the official version.
-
-
-
Conclusion and FAQs
-
APK Magic COC S1 is a private server for Clash of Clans that offers unlimited resources and custom mods. It is a great alternative for Clash of Clans fans who want to enjoy the game without any limitations. However, it also has some disadvantages that you should consider before downloading it, such as being not official, not compatible, and not updated. If you are interested in trying out APK Magic COC S1, you can download it from the link and follow the steps we have provided in this article.
-
If you have any questions about APK Magic COC S1, you may find the answers in the following FAQs:
-
Q: Is APK Magic COC S1 safe to use?
-
A: APK Magic COC S1 is safe to use as long as you download it from a trusted source and enable unknown sources on your device. However, you should always be careful when installing apps from unknown sources, as they may contain malware or viruses that can harm your device or steal your data.
-
Download clash of magic apk private server game coc terbaru[^3^]
-Clash of magic apk android game free download latest version[^1^]
-Magic coc s1 10.322 r1 apk android app free download updated version[^2^]
-How to install clash of magic apk on android device without root
-Clash of magic apk mod unlimited resources and gems for coc
-Magic coc s1 10.322 r1 app private server with custom mods and commands
-Clash of magic apk download for ios iphone ipad ipod touch
-Clash of clans with unlimited resources using clash of magic apk
-Magic coc s1 10.322 r1 apk features and benefits for coc players
-Clash of magic apk review and rating by users and experts
-How to update clash of magic apk to the latest version easily
-Clash of magic apk vs clash of lights apk which one is better
-Magic coc s1 10.322 r1 apk download link and installation guide
-Clash of magic apk troubleshooting and support for common issues
-Magic coc s1 10.322 r1 app download size and compatibility with android devices
-Clash of magic apk alternatives and similar apps for coc private server
-Magic coc s1 10.322 r1 app pros and cons for coc fans
-Clash of magic apk security and safety tips for downloading and playing
-Magic coc s1 10.322 r1 app feedback and suggestions from users and developers
-Clash of magic apk faq and answers to frequently asked questions
-Magic coc s1 10.322 r1 app screenshots and videos for preview and demonstration
-Clash of magic apk history and development by tatem games inc.
-Magic coc s1 10.322 r1 app news and updates from official sources
-Clash of magic apk comparison and difference with original coc game
-Magic coc s1 10.322 r1 app requirements and specifications for optimal performance
-Clash of magic apk advantages and disadvantages for coc lovers
-Magic coc s1 10.322 r1 app testimonials and reviews from satisfied users
-Clash of magic apk tips and tricks for mastering the game
-Magic coc s1 10.322 r1 app hacks and cheats for getting more resources and gems
-Clash of magic apk best practices and recommendations for playing the game
-Magic coc s1 10.322 r1 app rankings and ratings on google play store and other platforms
-Clash of magic apk fun facts and trivia about the game and its developers
-Magic coc s1 10.322 r1 app challenges and achievements for completing the game
-Clash of magic apk community and social media for connecting with other players
-Magic coc s1 10.322 r1 app tutorials and guides for learning the game
-Clash of magic apk statistics and data for analyzing the game performance
-Magic coc s1 10.322 r1 app rewards and incentives for playing the game regularly
-Clash of magic apk myths and misconceptions about the game and its features
-Magic coc s1 10.322 r1 app terms and conditions for using the game legally
-
Q: Can I play APK Magic COC S1 on PC?
-
A: Yes, you can play APK Magic COC S1 on PC using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC. Some of the popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. You can download any of these emulators from their official websites and install them on your PC. Then, you can download APK Magic COC S1 from the link and install it on your emulator. After that, you can launch the app and play it on your PC.
-
Q: How can I update APK Magic COC S1?
-
A: APK Magic COC S1 is not updated regularly with the latest features and updates from Clash of Clans. However, if there is a new version available, you can update it by downloading the latest APK file from the link and installing it over the existing app. You don't have to uninstall the previous version or lose your data.
-
Q: How can I contact the developer of APK Magic COC S1?
-
A: The developer of APK Magic COC S1 is unknown and does not have an official website or social media account. Therefore, it is difficult to contact them or get support from them. However, you can try to contact them through their email address or their Telegram group.
Q: What is the difference between APK Magic COC S1 and Clash of Clans?
-
A: APK Magic COC S1 and Clash of Clans are both strategy games that involve building bases, training troops, and attacking other players. However, APK Magic COC S1 is a modified version of Clash of Clans that runs on a private server and has unlimited resources and custom mods. Clash of Clans is the official version of the game that runs on Supercell's servers and has limited resources and standard features.
-
Q: Can I play APK Magic COC S1 with my friends?
-
A: Yes, you can play APK Magic COC S1 with your friends if they are also using the same private server as you. You can join clans and chat with your friends in the game. You can also invite your friends to join your server by sharing the link or the QR code. However, you cannot play APK Magic COC S1 with your friends who are using the official version of Clash of Clans or a different private server.
-
We hope this article has helped you learn more about APK Magic COC S1 versi terbaru and how to download and install it on your device. If you have any feedback or suggestions, please feel free to contact us at or join our Telegram group . Thank you for reading and happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FIFA Mobile Hile - The Best Tricks and Tips for Winning Every Match.md b/spaces/1phancelerku/anime-remove-background/FIFA Mobile Hile - The Best Tricks and Tips for Winning Every Match.md
deleted file mode 100644
index 4a7f3c875732c74e19057b24bd9676c6dff79cfb..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FIFA Mobile Hile - The Best Tricks and Tips for Winning Every Match.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
FIFA APK Hile: How to Play FIFA Mobile with Unlimited Coins and Gems
-
If you are a fan of football games, you probably know about FIFA Mobile, the popular mobile game from EA Sports that lets you play with your favorite teams and players from around the world. But did you know that there is a way to play FIFA Mobile with unlimited coins and gems? Yes, you heard that right. With FIFA APK Hile, you can enjoy the game without spending any money or waiting for hours to earn coins and gems. In this article, we will tell you everything you need to know about FIFA APK Hile, including what it is, why you should use it, how to download and install it, how to use it, what are its features, what are some tips and tricks for it, and what are the risks of using it.
-
What is FIFA APK Hile?
-
FIFA APK Hile is a modified version of FIFA Mobile that gives you unlimited coins and gems to buy players, packs, and upgrades in the game. Coins and gems are the main currencies in FIFA Mobile that allow you to improve your team and compete with other players online. However, earning coins and gems in the game can be very slow and tedious, especially if you want to get the best players and items. That's why some people use FIFA APK Hile to get unlimited coins and gems for free.
There are many reasons why you might want to use FIFA APK Hile. Here are some of them:
-
-
You can have more fun playing the game without worrying about running out of coins or gems.
-
You can save money that you would otherwise spend on buying coins or gems with real money.
-
You can build your dream team with any players you want, regardless of their price or availability.
-
You can unlock all the modes, features, and events in the game that require coins or gems.
-
You can experiment with different strategies and tactics without risking your coins or gems.
-
-
How to Download and Install FIFA APK Hile?
-
Downloading and installing FIFA APK Hile is very easy. Just follow these steps:
-
-
Go to this link: [FIFA Mobile APK Para Hilesi (2022) Sınırsız Para - Websesi](^2^) and click on the download button.
-
Wait for the download to finish and locate the FIFA APK Hile file on your device.
-
Tap on the file and allow the installation from unknown sources if prompted.
-
Wait for the installation to complete and launch the game.
-
Enjoy playing FIFA Mobile with unlimited coins and gems.
-
-
How to Use FIFA APK Hile?
-
Using FIFA APK Hile is very simple. Once you launch the game, you will see that you have unlimited coins and gems in your account. You can use them to buy anything you want in the game, such as players, packs, and upgrades. Here are some examples of how to use FIFA APK Hile:
-
-
To buy players, go to the market and search for the player you want. You can filter by name, rating, position, league, nation, or team. Then, tap on the player and buy him with coins or gems.
-
To buy packs, go to the store and choose the pack you want. You can buy premium packs, special packs, or event packs with coins or gems. Then, open the pack and see what players and items you get.
-
To upgrade your team, go to the team management and select the player you want to upgrade. You can upgrade his skills, chemistry, or rank with coins or gems. You can also train him with other players or items.
-
-
What are the Features of FIFA APK Hile?
-
FIFA APK Hile has many features that make it better than the original FIFA Mobile. Some of these features are:
-
-
New gameplay technology that makes the game more realistic, responsive, and fluid.
-
New modes such as Volta Football, Career Mode, Ultimate Team, and Champions League.
-
New players such as Messi, Ronaldo, Neymar, Mbappe, and Haaland.
-
New graphics that enhance the visual quality of the game.
-
-
What are the Tips and Tricks for FIFA APK Hile?
-
FIFA APK Hile is a fun and easy game to play, but there are some tips and tricks that can help you improve your skills and performance. Here are some of them:
-
fifa mobile apk hile indir
-fifa soccer apk hile nasıl yapılır
-fifa 2023 apk hile mod
-fifa ultimate team apk hile
-fifa 21 apk hile android oyun club
-fifa mobile apk hileli sınırsız para
-fifa 20 apk hile güncel
-fifa mobile apk hile yapma
-fifa 19 apk hile full
-fifa mobile apk hileli paket açılımı
-fifa soccer apk hileli versiyon
-fifa 22 apk hile mega
-fifa mobile apk hileli indirme linki
-fifa 18 apk hile kurulumu
-fifa mobile apk hileli oyun indir club
-fifa soccer apk hile nasıl indirilir
-fifa 17 apk hile no root
-fifa mobile apk hileli son sürüm
-fifa soccer apk hileli oyna
-fifa 16 apk hile mediafire
-fifa mobile apk hileli nasıl yüklenir
-fifa soccer apk hileli güncelleme
-fifa 15 apk hile offline
-fifa mobile apk hileli online
-fifa soccer apk hileli mod menu
-fifa 14 apk hile data
-fifa mobile apk hileli para kasma
-fifa soccer apk hileli hack
-fifa 13 apk hile android 1
-fifa mobile apk hileli yeni sezon
-fifa soccer apk hileli coins
-fifa 12 apk hile obb
-fifa mobile apk hileli transfer marketi açma
-fifa soccer apk hileli vip
-fifa 11 apk hile revdl
-fifa mobile apk hileli oyuncu yükseltme
-fifa soccer apk hileli unlimited money
-fifa 10 apk hile rexdl
-fifa mobile apk hileli draft modu
-fifa soccer apk hileli points
-
-
Use explosive sprint to accelerate past defenders and create space for yourself or your teammates.
-
Use finesse shots to curl the ball around the goalkeeper and score from tight angles.
-
Use creative runs to control where your teammates run and create more options for passing or shooting.
-
Use adaptive right stick switching to switch between defenders quickly and easily.
-
-
What are the Risks of FIFA APK Hile?
-
FIFA APK Hile may sound like a great way to play FIFA Mobile, but it also comes with some risks that you should be aware of. Some of these risks are:
-
-
You may violate the terms of service of EA Sports and get banned from playing FIFA Mobile or other EA games.
-
You may lose your progress and data if you uninstall FIFA APK Hile or update it to a newer version.
-
You may expose your device to malware or viruses that may harm your device or steal your personal information.
-
-
Conclusion
-
FIFA APK Hile is a modified version of FIFA Mobile that gives you unlimited coins and gems to play the game without any limitations. It has many features, benefits, and tips that make it more enjoyable and exciting than the original game. However, it also has some risks that you should consider before using it. If you want to try FIFA APK Hile, you can download it from this link: [FIFA Mobile APK Para Hilesi (2022) Sınırsız Para - Websesi] and follow the instructions in this article. Have fun playing FIFA Mobile with unlimited coins and gems!
-
FAQs
-
Here are some frequently asked questions about FIFA APK Hile:
-
-
What is FIFA APK Hile?
-
FIFA APK Hile is a modified version of FIFA Mobile that gives you unlimited coins and gems to buy players, packs, and upgrades in the game.
-
How to download and install FIFA APK Hile?
-
You can download FIFA APK Hile from this link: [FIFA Mobile APK Para Hilesi (2022) Sınırsız Para - Websesi] and install it on your Android device by following these steps:
- Go to this link: [FIFA Mobile APK Para Hilesi (2022) Sınırsız Para - Websesi] and click on the download button.
- Wait for the download to finish and locate the FIFA APK Hile file on your device.
- Tap on the file and allow the installation from unknown sources if prompted.
- Wait for the installation to complete and launch the game.
- Enjoy playing FIFA Mobile with unlimited coins and gems.
-
How to use FIFA APK Hile?
-
You can use FIFA APK Hile to buy anything you want in the game, such as players, packs, and upgrades. You can also unlock all the modes, features, and events in the game. Here are some examples of how to use FIFA APK Hile:
- To buy players, go to the market and search for the player you want. You can filter by name, rating, position, league, nation, or team. Then, tap on the player and buy him with coins or gems.
- To buy packs, go to the store and choose the pack you want. You can buy premium packs, special packs, or event packs with coins or gems. Then, open the pack and see what players and items you get.
- To upgrade your team, go to the team management and select the player you want to upgrade. You can upgrade his skills, chemistry, or rank with coins or gems. You can also train him with other players or items.
-
What are the features of FIFA APK Hile?
-
FIFA APK Hile has many features that make it better than the original FIFA Mobile. Some of these features are:
- New gameplay technology that makes the game more realistic, responsive, and fluid.
- New modes such as Volta Football, Career Mode, Ultimate Team, and Champions League.
- New players such as Messi, Ronaldo, Neymar, Mbappe, and Haaland.
- New graphics that enhance the visual quality of the game.
-
What are the tips and tricks for FIFA APK Hile?
-
FIFA APK Hile is a fun and easy game to play, but there are some tips and tricks that can help you improve your skills and performance. Here are some of them:
- Use explosive sprint to accelerate past defenders and create space for yourself or your teammates.
- Use finesse shots to curl the ball around the goalkeeper and score from tight angles.
- Use creative runs to control where your teammates run and create more options for passing or shooting.
- Use adaptive right stick switching to switch between defenders quickly and easily.
-
What are the risks of FIFA APK Hile?
-
FIFA APK Hile may sound like a great way to play FIFA Mobile, but it also comes with some risks that you should be aware of. Some of these risks are:
- You may violate the terms of service of EA Sports and get banned from playing FIFA Mobile or other EA games.
- You may lose your progress and data if you uninstall FIFA APK Hile or update it to a newer version.
- You may expose your device to malware or viruses that may harm your device or steal your personal information.
-
-
\ No newline at end of file
diff --git a/spaces/44ov41za8i/FreeVC/utils.py b/spaces/44ov41za8i/FreeVC/utils.py
deleted file mode 100644
index 1bd5b6185af6c9f1c270b8ba345bfc36d059e081..0000000000000000000000000000000000000000
--- a/spaces/44ov41za8i/FreeVC/utils.py
+++ /dev/null
@@ -1,305 +0,0 @@
-import os
-import sys
-import glob
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-import torchvision
-from torch.nn import functional as F
-from commons import sequence_mask
-# The two imports below are assumed from how get_cmodel() and get_vocoder() use them;
-# they are project-local modules (the WavLM content model and the HiFi-GAN vocoder).
-from wavlm import WavLM, WavLMConfig
-import hifigan
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def get_cmodel(rank):
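-    # Load the pretrained WavLM checkpoint and place the model on the given GPU;
-    # get_content() below uses it to extract content features from waveforms.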
- checkpoint = torch.load('wavlm/WavLM-Large.pt')
- cfg = WavLMConfig(checkpoint['cfg'])
- cmodel = WavLM(cfg).cuda(rank)
- cmodel.load_state_dict(checkpoint['model'])
- cmodel.eval()
- return cmodel
-
-
-def get_content(cmodel, y):
- with torch.no_grad():
- c = cmodel.extract_features(y.squeeze(1))[0]
- c = c.transpose(1, 2)
- return c
-
-
-def get_vocoder(rank):
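-    # Build a HiFi-GAN generator from hifigan/config.json, load the pretrained
-    # "generator_v1" weights, and prepare it for inference on the given GPU.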
- with open("hifigan/config.json", "r") as f:
- config = json.load(f)
- config = hifigan.AttrDict(config)
- vocoder = hifigan.Generator(config)
- ckpt = torch.load("hifigan/generator_v1")
- vocoder.load_state_dict(ckpt["generator"])
- vocoder.eval()
- vocoder.remove_weight_norm()
- vocoder.cuda(rank)
- return vocoder
-
-
-def transform(mel, height): # 68-92
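-    # Resize the mel spectrogram to `height` frequency bins (a vertical-stretch augmentation),
-    # then crop or pad back to the original bin count; padding repeats the last bin plus a
-    # little noise so the added rows look like low-energy "silence".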
- #r = np.random.random()
- #rate = r * 0.3 + 0.85 # 0.85-1.15
- #height = int(mel.size(-2) * rate)
- tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1)))
- if height >= mel.size(-2):
- return tgt[:, :mel.size(-2), :]
- else:
- silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1)
- silence += torch.randn_like(silence) / 10
- return torch.cat((tgt, silence), 1)
-
-
-def stretch(mel, width): # 0.5-2
- return torchvision.transforms.functional.resize(mel, (mel.size(-2), width))
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict= {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})" .format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams =HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams =HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
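For reference, a minimal usage sketch of the HParams helper defined above. The configuration values are made up for illustration, and it assumes HParams is importable from this utils module:

```python
from utils import HParams  # the class defined above

# HParams wraps a (possibly nested) config dict so values can be read either
# as attributes or by key; nested dicts become nested HParams instances.
hps = HParams(**{
    "train": {"batch_size": 32, "learning_rate": 2e-4},
    "model": {"hidden_channels": 192},
})

assert hps.train.batch_size == 32                # attribute access on nested config
assert hps["model"]["hidden_channels"] == 192    # item access works as well
assert "train" in hps and len(hps) == 2
```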
diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py
deleted file mode 100644
index 77caafdbb300d8109d5bfdb844f131710ef81f20..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from easydict import EasyDict as edict
-
-# configs for test speed
-
-config = edict()
-config.loss = "arcface"
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 0.1
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "synthetic"
-config.num_classes = 300 * 10000
-config.num_epoch = 30
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = []
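As a rough sketch of how an edict config like this one is typically consumed: only the field names come from the file above; the package path and the learning-rate rescaling are illustrative assumptions. Because the module name starts with a digit, it has to be loaded via importlib rather than a plain import:

```python
import importlib

# Hypothetical package path; adjust to wherever the configs package lives.
cfg_module = importlib.import_module("configs.3millions_pfc")
config = cfg_module.config

print(config.network, config.num_classes)   # "r50", 3000000
# The file notes that lr = 0.1 corresponds to a global batch size of 512,
# so a consumer might rescale it for its own batch size:
effective_lr = config.lr * config.batch_size / 512
```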
diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_os.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_os.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/inference.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/inference.py
deleted file mode 100644
index 0bda414e67e4f2a6e829930c85352a0a41a7f6d9..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/inference.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from data_gen.tts.emotion.params_data import *
-from data_gen.tts.emotion.model import EmotionEncoder
-from data_gen.tts.emotion.audio import preprocess_wav # We want to expose this function from here
-from matplotlib import cm
-from data_gen.tts.emotion import audio
-from pathlib import Path
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-_model = None # type: EmotionEncoder
-_device = None # type: torch.device
-
-
-def load_model(weights_fpath: Path, device=None):
- """
-    Loads the model in memory. If this function is not explicitly called, it will be run on the
- first call to embed_frames() with the default weights file.
-
- :param weights_fpath: the path to saved model weights.
- :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). The
- model will be loaded and will run on this device. Outputs will however always be on the cpu.
- If None, will default to your GPU if it"s available, otherwise your CPU.
- """
- # TODO: I think the slow loading of the encoder might have something to do with the device it
- # was saved on. Worth investigating.
- global _model, _device
- if device is None:
- _device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- elif isinstance(device, str):
- _device = torch.device(device)
- _model = EmotionEncoder(_device, torch.device("cpu"))
- checkpoint = torch.load(weights_fpath)
- _model.load_state_dict(checkpoint["model_state"])
- _model.eval()
- print("Loaded encoder trained to step %d" % (checkpoint["step"]))
-
-
-def is_loaded():
- return _model is not None
-
-
-def embed_frames_batch(frames_batch):
- """
-    Computes embeddings for a batch of mel spectrograms.
-
-    :param frames_batch: a batch of mel spectrograms as a numpy array of float32 of shape
- (batch_size, n_frames, n_channels)
- :return: the embeddings as a numpy array of float32 of shape (batch_size, model_embedding_size)
- """
- if _model is None:
- raise Exception("Model was not loaded. Call load_model() before inference.")
-
- frames = torch.from_numpy(frames_batch).to(_device)
- embed = _model.inference(frames).detach().cpu().numpy()
- return embed
-
-
-def compute_partial_slices(n_samples, partial_utterance_n_frames=partials_n_frames,
- min_pad_coverage=0.75, overlap=0.5):
- """
- Computes where to split an utterance waveform and its corresponding mel spectrogram to obtain
- partial utterances of each. Both the waveform and the mel
- spectrogram slices are returned, so as to make each partial utterance waveform correspond to
- its spectrogram. This function assumes that the mel spectrogram parameters used are those
- defined in params_data.py.
-
- The returned ranges may be indexing further than the length of the waveform. It is
- recommended that you pad the waveform with zeros up to wave_slices[-1].stop.
-
- :param n_samples: the number of samples in the waveform
- :param partial_utterance_n_frames: the number of mel spectrogram frames in each partial
- utterance
- :param min_pad_coverage: when reaching the last partial utterance, it may or may not have
-    enough frames. If at least min_pad_coverage of partial_utterance_n_frames frames are present,
- then the last partial utterance will be considered, as if we padded the audio. Otherwise,
- it will be discarded, as if we trimmed the audio. If there aren't enough frames for 1 partial
- utterance, this parameter is ignored so that the function always returns at least 1 slice.
- :param overlap: by how much the partial utterance should overlap. If set to 0, the partial
- utterances are entirely disjoint.
- :return: the waveform slices and mel spectrogram slices as lists of array slices. Index
- respectively the waveform and the mel spectrogram with these slices to obtain the partial
- utterances.
- """
- assert 0 <= overlap < 1
- assert 0 < min_pad_coverage <= 1
-
- samples_per_frame = int((sampling_rate * mel_window_step / 1000))
- n_frames = int(np.ceil((n_samples + 1) / samples_per_frame))
- frame_step = max(int(np.round(partial_utterance_n_frames * (1 - overlap))), 1)
-
- # Compute the slices
- wav_slices, mel_slices = [], []
- steps = max(1, n_frames - partial_utterance_n_frames + frame_step + 1)
- for i in range(0, steps, frame_step):
- mel_range = np.array([i, i + partial_utterance_n_frames])
- wav_range = mel_range * samples_per_frame
- mel_slices.append(slice(*mel_range))
- wav_slices.append(slice(*wav_range))
-
- # Evaluate whether extra padding is warranted or not
- last_wav_range = wav_slices[-1]
- coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start)
- if coverage < min_pad_coverage and len(mel_slices) > 1:
- mel_slices = mel_slices[:-1]
- wav_slices = wav_slices[:-1]
-
- return wav_slices, mel_slices
-
-
-def embed_utterance(wav, using_partials=True, return_partials=False, **kwargs):
- """
- Computes an embedding for a single utterance.
-
- # TODO: handle multiple wavs to benefit from batching on GPU
- :param wav: a preprocessed (see audio.py) utterance waveform as a numpy array of float32
- :param using_partials: if True, then the utterance is split in partial utterances of
-    partial_utterance_n_frames frames and the utterance embedding is computed from their
- normalized average. If False, the utterance is instead computed from feeding the entire
-    spectrogram to the network.
- :param return_partials: if True, the partial embeddings will also be returned along with the
- wav slices that correspond to the partial embeddings.
-    :param kwargs: additional arguments to compute_partial_slices()
- :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If
-    return_partials is True, the partial utterances as a numpy array of float32 of shape
- (n_partials, model_embedding_size) and the wav partials as a list of slices will also be
-    returned. If using_partials is simultaneously set to False, both these values will be None
- instead.
- """
- # Process the entire utterance if not using partials
- if not using_partials:
- frames = audio.wav_to_mel_spectrogram(wav)
- embed = embed_frames_batch(frames[None, ...])[0]
- if return_partials:
- return embed, None, None
- return embed
-
- # Compute where to split the utterance into partials and pad if necessary
- wave_slices, mel_slices = compute_partial_slices(len(wav), **kwargs)
- max_wave_length = wave_slices[-1].stop
- if max_wave_length >= len(wav):
- wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant")
-
- # Split the utterance into partials
- frames = audio.wav_to_mel_spectrogram(wav)
- frames_batch = np.array([frames[s] for s in mel_slices])
- partial_embeds = embed_frames_batch(frames_batch)
-
- # Compute the utterance embedding from the partial embeddings
- raw_embed = np.mean(partial_embeds, axis=0)
- embed = raw_embed / np.linalg.norm(raw_embed, 2)
-
- if return_partials:
- return embed, partial_embeds, wave_slices
- return embed
-
-
-def embed_speaker(wavs, **kwargs):
-    raise NotImplementedError()
-
-
-def plot_embedding_as_heatmap(embed, ax=None, title="", shape=None, color_range=(0, 0.30)):
- if ax is None:
- ax = plt.gca()
-
- if shape is None:
- height = int(np.sqrt(len(embed)))
- shape = (height, -1)
- embed = embed.reshape(shape)
-
- cmap = cm.get_cmap()
- mappable = ax.imshow(embed, cmap=cmap)
- cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04)
- cbar.set_clim(*color_range)
-
- ax.set_xticks([]), ax.set_yticks([])
- ax.set_title(title)
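For orientation, a minimal usage sketch of the emotion-encoder inference module deleted above; the checkpoint path and wav file are placeholders, not assets shipped with the repo.

from pathlib import Path
from data_gen.tts.emotion import inference as emotion_encoder

# Load the encoder once (path is hypothetical), then embed a preprocessed utterance.
emotion_encoder.load_model(Path("checkpoints/emotion_encoder.pt"), device="cpu")
wav = emotion_encoder.preprocess_wav(Path("example.wav"))         # resample / normalize
embed = emotion_encoder.embed_utterance(wav, using_partials=True)
print(embed.shape)  # (model_embedding_size,), L2-normalized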
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/openaimodel.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/openaimodel.py
deleted file mode 100644
index 0a274d84dfe6ef3e02848861f5b7a7c7e242ca98..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/openaimodel.py
+++ /dev/null
@@ -1,963 +0,0 @@
-from abc import abstractmethod
-from functools import partial
-import math
-from typing import Iterable
-
-import numpy as np
-import torch as th
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ldm.modules.diffusionmodules.util import (
- checkpoint,
- conv_nd,
- linear,
- avg_pool_nd,
- zero_module,
- normalization,
- timestep_embedding,
-)
-from ldm.modules.attention import SpatialTransformer
-
-
-# dummy replace
-def convert_module_to_f16(x):
- pass
-
-def convert_module_to_f32(x):
- pass
-
-
-## go
-class AttentionPool2d(nn.Module):
- """
- Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py
- """
-
- def __init__(
- self,
- spacial_dim: int,
- embed_dim: int,
- num_heads_channels: int,
- output_dim: int = None,
- ):
- super().__init__()
- self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5)
- self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)
- self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)
- self.num_heads = embed_dim // num_heads_channels
- self.attention = QKVAttention(self.num_heads)
-
- def forward(self, x):
- b, c, *_spatial = x.shape
- x = x.reshape(b, c, -1) # NC(HW)
- x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1)
- x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1)
- x = self.qkv_proj(x)
- x = self.attention(x)
- x = self.c_proj(x)
- return x[:, :, 0]
-
-
-class TimestepBlock(nn.Module):
- """
- Any module where forward() takes timestep embeddings as a second argument.
- """
-
- @abstractmethod
- def forward(self, x, emb):
- """
- Apply the module to `x` given `emb` timestep embeddings.
- """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
- """
- A sequential module that passes timestep embeddings to the children that
- support it as an extra input.
- """
-
- def forward(self, x, emb, context=None):
- for layer in self:
- if isinstance(layer, TimestepBlock):
- x = layer(x, emb)
- elif isinstance(layer, SpatialTransformer):
- x = layer(x, context)
- else:
- x = layer(x)
- return x
-
-
-class Upsample(nn.Module):
- """
- An upsampling layer with an optional convolution.
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- upsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- if use_conv:
- self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.dims == 3:
- x = F.interpolate(
- x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
- )
- else:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
- if self.use_conv:
- x = self.conv(x)
- return x
-
-class TransposedUpsample(nn.Module):
- 'Learned 2x upsampling without padding'
- def __init__(self, channels, out_channels=None, ks=5):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
-
- self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2)
-
- def forward(self,x):
- return self.up(x)
-
-
-class Downsample(nn.Module):
- """
- A downsampling layer with an optional convolution.
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- stride = 2 if dims != 3 else (1, 2, 2)
- if use_conv:
- self.op = conv_nd(
- dims, self.channels, self.out_channels, 3, stride=stride, padding=padding
- )
- else:
- assert self.channels == self.out_channels
- self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.op(x)
-
-
-class ResBlock(TimestepBlock):
- """
- A residual block that can optionally change the number of channels.
- :param channels: the number of input channels.
- :param emb_channels: the number of timestep embedding channels.
- :param dropout: the rate of dropout.
- :param out_channels: if specified, the number of out channels.
- :param use_conv: if True and out_channels is specified, use a spatial
- convolution instead of a smaller 1x1 convolution to change the
- channels in the skip connection.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param use_checkpoint: if True, use gradient checkpointing on this module.
- :param up: if True, use this block for upsampling.
- :param down: if True, use this block for downsampling.
- """
-
- def __init__(
- self,
- channels,
- emb_channels,
- dropout,
- out_channels=None,
- use_conv=False,
- use_scale_shift_norm=False,
- dims=2,
- use_checkpoint=False,
- up=False,
- down=False,
- ):
- super().__init__()
- self.channels = channels
- self.emb_channels = emb_channels
- self.dropout = dropout
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_checkpoint = use_checkpoint
- self.use_scale_shift_norm = use_scale_shift_norm
-
- self.in_layers = nn.Sequential(
- normalization(channels),
- nn.SiLU(),
- conv_nd(dims, channels, self.out_channels, 3, padding=1),
- )
-
- self.updown = up or down
-
- if up:
- self.h_upd = Upsample(channels, False, dims)
- self.x_upd = Upsample(channels, False, dims)
- elif down:
- self.h_upd = Downsample(channels, False, dims)
- self.x_upd = Downsample(channels, False, dims)
- else:
- self.h_upd = self.x_upd = nn.Identity()
-
- self.emb_layers = nn.Sequential(
- nn.SiLU(),
- linear(
- emb_channels,
- 2 * self.out_channels if use_scale_shift_norm else self.out_channels,
- ),
- )
- self.out_layers = nn.Sequential(
- normalization(self.out_channels),
- nn.SiLU(),
- nn.Dropout(p=dropout),
- zero_module(
- conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)
- ),
- )
-
- if self.out_channels == channels:
- self.skip_connection = nn.Identity()
- elif use_conv:
- self.skip_connection = conv_nd(
- dims, channels, self.out_channels, 3, padding=1
- )
- else:
- self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
-
- def forward(self, x, emb):
- """
- Apply the block to a Tensor, conditioned on a timestep embedding.
- :param x: an [N x C x ...] Tensor of features.
- :param emb: an [N x emb_channels] Tensor of timestep embeddings.
- :return: an [N x C x ...] Tensor of outputs.
- """
- return checkpoint(
- self._forward, (x, emb), self.parameters(), self.use_checkpoint
- )
-
-
- def _forward(self, x, emb):
- if self.updown:
- in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
- h = in_rest(x)
- h = self.h_upd(h)
- x = self.x_upd(x)
- h = in_conv(h)
- else:
- h = self.in_layers(x)
- emb_out = self.emb_layers(emb).type(h.dtype)
- while len(emb_out.shape) < len(h.shape):
- emb_out = emb_out[..., None]
- if self.use_scale_shift_norm:
- out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
- scale, shift = th.chunk(emb_out, 2, dim=1)
- h = out_norm(h) * (1 + scale) + shift
- h = out_rest(h)
- else:
- h = h + emb_out
- h = self.out_layers(h)
- return self.skip_connection(x) + h
-
-
-class AttentionBlock(nn.Module):
- """
- An attention block that allows spatial positions to attend to each other.
- Originally ported from here, but adapted to the N-d case.
- https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
- """
-
- def __init__(
- self,
- channels,
- num_heads=1,
- num_head_channels=-1,
- use_checkpoint=False,
- use_new_attention_order=False,
- ):
- super().__init__()
- self.channels = channels
- if num_head_channels == -1:
- self.num_heads = num_heads
- else:
- assert (
- channels % num_head_channels == 0
- ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
- self.num_heads = channels // num_head_channels
- self.use_checkpoint = use_checkpoint
- self.norm = normalization(channels)
- self.qkv = conv_nd(1, channels, channels * 3, 1)
- if use_new_attention_order:
- # split qkv before split heads
- self.attention = QKVAttention(self.num_heads)
- else:
- # split heads before split qkv
- self.attention = QKVAttentionLegacy(self.num_heads)
-
- self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
- def forward(self, x):
- return checkpoint(self._forward, (x,), self.parameters(), True) # TODO: check checkpoint usage, is True # TODO: fix the .half call!!!
- #return pt_checkpoint(self._forward, x) # pytorch
-
- def _forward(self, x):
- b, c, *spatial = x.shape
- x = x.reshape(b, c, -1)
- qkv = self.qkv(self.norm(x))
- h = self.attention(qkv)
- h = self.proj_out(h)
- return (x + h).reshape(b, c, *spatial)
-
-
-def count_flops_attn(model, _x, y):
- """
- A counter for the `thop` package to count the operations in an
- attention operation.
- Meant to be used like:
- macs, params = thop.profile(
- model,
- inputs=(inputs, timestamps),
- custom_ops={QKVAttention: QKVAttention.count_flops},
- )
- """
- b, c, *spatial = y[0].shape
- num_spatial = int(np.prod(spatial))
- # We perform two matmuls with the same number of ops.
- # The first computes the weight matrix, the second computes
- # the combination of the value vectors.
- matmul_ops = 2 * b * (num_spatial ** 2) * c
- model.total_ops += th.DoubleTensor([matmul_ops])
-
-
-class QKVAttentionLegacy(nn.Module):
- """
-    A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
- :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts", q * scale, k * scale
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum("bts,bcs->bct", weight, v)
- return a.reshape(bs, -1, length)
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class QKVAttention(nn.Module):
- """
- A module which performs QKV attention and splits in a different order.
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
- :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.chunk(3, dim=1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts",
- (q * scale).view(bs * self.n_heads, ch, length),
- (k * scale).view(bs * self.n_heads, ch, length),
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length))
- return a.reshape(bs, -1, length)
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class UNetModel(nn.Module):
- """
- The full UNet model with attention and timestep embedding.
- :param in_channels: channels in the input Tensor.
- :param model_channels: base channel count for the model.
- :param out_channels: channels in the output Tensor.
- :param num_res_blocks: number of residual blocks per downsample.
- :param attention_resolutions: a collection of downsample rates at which
- attention will take place. May be a set, list, or tuple.
- For example, if this contains 4, then at 4x downsampling, attention
- will be used.
- :param dropout: the dropout probability.
- :param channel_mult: channel multiplier for each level of the UNet.
- :param conv_resample: if True, use learned convolutions for upsampling and
- downsampling.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param num_classes: if specified (as an int), then this model will be
- class-conditional with `num_classes` classes.
- :param use_checkpoint: use gradient checkpointing to reduce memory usage.
- :param num_heads: the number of attention heads in each attention layer.
-    :param num_head_channels: if specified, ignore num_heads and instead use
- a fixed channel width per attention head.
- :param num_heads_upsample: works with num_heads to set a different number
- of heads for upsampling. Deprecated.
- :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
- :param resblock_updown: use residual blocks for up/downsampling.
- :param use_new_attention_order: use a different attention pattern for potentially
- increased efficiency.
- """
-
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- num_classes=None,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=-1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- use_spatial_transformer=False, # custom transformer support
- transformer_depth=1, # custom transformer support
- context_dim=None, # custom transformer support
- n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model
- legacy=True,
- ):
- super().__init__()
- if use_spatial_transformer:
- assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'
-
- if context_dim is not None:
- assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'
- from omegaconf.listconfig import ListConfig
- if type(context_dim) == ListConfig:
- context_dim = list(context_dim)
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- if num_heads == -1:
- assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'
-
- if num_head_channels == -1:
- assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'
-
- self.image_size = image_size
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
- self.predict_codebook_ids = n_embed is not None
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- if self.num_classes is not None:
- self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
- self.input_blocks = nn.ModuleList(
- [
- TimestepEmbedSequential(
- conv_nd(dims, in_channels, model_channels, 3, padding=1)# conv2d for txt2img/audio
- )
- ]
- )
- self._feature_size = model_channels
- input_block_chans = [model_channels]
- ch = model_channels
- ds = 1
- # downsample blocks
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=mult * model_channels,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = mult * model_channels
- if ds in attention_resolutions:
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- #num_heads = 1
- dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- ) if not use_spatial_transformer else SpatialTransformer(# transformer_depth is 1
- ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- #num_heads = 1
- dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- ) if not use_spatial_transformer else SpatialTransformer(
- ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
- # upsample blocks
- self.output_blocks = nn.ModuleList([])
- for level, mult in list(enumerate(channel_mult))[::-1]:
- for i in range(num_res_blocks + 1):
- ich = input_block_chans.pop()
- layers = [
- ResBlock(
- ch + ich,
- time_embed_dim,
- dropout,
- out_channels=model_channels * mult,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = model_channels * mult
- if ds in attention_resolutions:
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- #num_heads = 1
- dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads_upsample,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- ) if not use_spatial_transformer else SpatialTransformer(
- ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
- )
- )
- if level and i == num_res_blocks:
- out_ch = ch
- layers.append(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- up=True,
- )
- if resblock_updown
- else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- ds //= 2
- self.output_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
-
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),
- )
- if self.predict_codebook_ids:
- self.id_predictor = nn.Sequential(
- normalization(ch),
- conv_nd(dims, model_channels, n_embed, 1),
- #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits
- )
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
- self.output_blocks.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
- self.output_blocks.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps=None, context=None, y=None,**kwargs):
- """
- Apply the model to an input batch.
- :param x: an [N x C x ...] Tensor of inputs.
-        :param timesteps: a 1-D batch of timesteps, shape [N]
-        :param context: conditioning plugged in via cross-attention; for txt2img the shape is [N,77,context_dim]
- :param y: an [N] Tensor of labels, if class-conditional.
- :return: an [N x C x ...] Tensor of outputs.
- """
- # print(f"in unet {x.shape}")
- assert (y is not None) == (
- self.num_classes is not None
- ), "must specify y if and only if the model is class-conditional"
- hs = []
- t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)# shape [N,self.model_channels]
- emb = self.time_embed(t_emb)# shape [N,context_dim]
-
- if self.num_classes is not None:# only for class label
- assert y.shape == (x.shape[0],)
- emb = emb + self.label_emb(y)
-
- h = x.type(self.dtype)# [N,C,10,106]
- for module in self.input_blocks:
- h = module(h, emb, context)# 0:[N,self.model_channels,10,106],1:[N,self.model_channels,10,106],2:[N,self.model_channels,10,106] 3:[N,self.model_channels,5,53] 4:[N,self.model_channels,5,53] 5:[N,self.model_channels*2,5,53]
- hs.append(h)
- h = self.middle_block(h, emb, context)# no shape change
- for module in self.output_blocks:
-            h = th.cat([h, hs.pop()], dim=1)# channel dim doubles (or grows by model_channels) here; other dims unchanged
-            h = module(h, emb, context)# channel dim is halved back to its previous size; h, w unchanged or doubled
-        h = h.type(x.dtype)# h now has the same shape as the input x
- if self.predict_codebook_ids:
- return self.id_predictor(h)
- else:
- return self.out(h)
-
-
-class EncoderUNetModel(nn.Module):
- """
- The half UNet model with attention and timestep embedding.
- For usage, see UNet.
- """
-
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- pool="adaptive",
- *args,
- **kwargs
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- self.input_blocks = nn.ModuleList(
- [
- TimestepEmbedSequential(
- conv_nd(dims, in_channels, model_channels, 3, padding=1)
- )
- ]
- )
- self._feature_size = model_channels
- input_block_chans = [model_channels]
- ch = model_channels
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=mult * model_channels,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = mult * model_channels
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
- self.pool = pool
- if pool == "adaptive":
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- nn.AdaptiveAvgPool2d((1, 1)),
- zero_module(conv_nd(dims, ch, out_channels, 1)),
- nn.Flatten(),
- )
- elif pool == "attention":
- assert num_head_channels != -1
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- AttentionPool2d(
- (image_size // ds), ch, num_head_channels, out_channels
- ),
- )
- elif pool == "spatial":
- self.out = nn.Sequential(
- nn.Linear(self._feature_size, 2048),
- nn.ReLU(),
- nn.Linear(2048, self.out_channels),
- )
- elif pool == "spatial_v2":
- self.out = nn.Sequential(
- nn.Linear(self._feature_size, 2048),
- normalization(2048),
- nn.SiLU(),
- nn.Linear(2048, self.out_channels),
- )
- else:
- raise NotImplementedError(f"Unexpected {pool} pooling")
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps):
- """
- Apply the model to an input batch.
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :return: an [N x K] Tensor of outputs.
- """
- emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
-
- results = []
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- if self.pool.startswith("spatial"):
- results.append(h.type(x.dtype).mean(dim=(2, 3)))
- h = self.middle_block(h, emb)
- if self.pool.startswith("spatial"):
- results.append(h.type(x.dtype).mean(dim=(2, 3)))
- h = th.cat(results, axis=-1)
- return self.out(h)
- else:
- h = h.type(x.dtype)
- return self.out(h)
-
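As a quick reference for the UNetModel interface documented above, here is an illustrative sketch (all hyperparameters are invented for the example) that builds a small model and runs one forward pass:

import torch as th
from ldm.modules.diffusionmodules.openaimodel import UNetModel

unet = UNetModel(
    image_size=32,
    in_channels=4,
    model_channels=64,
    out_channels=4,
    num_res_blocks=2,
    attention_resolutions=(4,),   # attention once the input is downsampled 4x
    channel_mult=(1, 2, 4),
    num_heads=4,
)
x = th.randn(2, 4, 32, 32)        # [N x C x H x W] batch
t = th.randint(0, 1000, (2,))     # one timestep per sample
eps = unet(x, timesteps=t)        # output has the same shape as x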
diff --git a/spaces/AILab-CVC/EvalCrafter/test.py b/spaces/AILab-CVC/EvalCrafter/test.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/AiService.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/AiService.py
deleted file mode 100644
index 2b5a6e7de3912f7588377a881b7d5523e35d7212..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/AiService.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from __future__ import annotations
-
-import requests
-
-from ..typing import Any, CreateResult
-from .base_provider import BaseProvider
-
-
-class AiService(BaseProvider):
- url = "https://aiservice.vercel.app/"
- working = False
- supports_gpt_35_turbo = True
-
- @staticmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool,
- **kwargs: Any,
- ) -> CreateResult:
- base = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
- base += "\nassistant: "
-
- headers = {
- "accept": "*/*",
- "content-type": "text/plain;charset=UTF-8",
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "same-origin",
- "Referer": "https://aiservice.vercel.app/chat",
- }
- data = {"input": base}
- url = "https://aiservice.vercel.app/api/chat/answer"
- response = requests.post(url, headers=headers, json=data)
- response.raise_for_status()
- yield response.json()["data"]
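A hedged usage sketch for the provider above; note the class sets working = False, so the upstream endpoint may no longer respond, and the messages are example data only.

messages = [{"role": "user", "content": "Hello"}]
for chunk in AiService.create_completion(model="gpt-3.5-turbo", messages=messages, stream=False):
    print(chunk)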
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Bing.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Bing.py
deleted file mode 100644
index f4275a5f54d23bedf2392aad143058c6245bbb00..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Bing.py
+++ /dev/null
@@ -1,300 +0,0 @@
-from __future__ import annotations
-
-import random
-import uuid
-import json
-import os
-import uuid
-import urllib.parse
-from aiohttp import ClientSession, ClientTimeout
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-class Tones():
- creative = "Creative"
- balanced = "Balanced"
- precise = "Precise"
-
-default_cookies = {
- 'SRCHD' : 'AF=NOFORM',
- 'PPLState' : '1',
- 'KievRPSSecAuth': '',
- 'SUID' : '',
- 'SRCHUSR' : '',
- 'SRCHHPGUSR' : '',
-}
-
-class Bing(AsyncGeneratorProvider):
- url = "https://bing.com/chat"
- working = True
- supports_gpt_4 = True
-
- @staticmethod
- def create_async_generator(
- model: str,
- messages: list[dict[str, str]],
- cookies: dict = None,
- tone: str = Tones.creative,
- **kwargs
- ) -> AsyncGenerator:
- if len(messages) < 2:
- prompt = messages[0]["content"]
- context = None
- else:
- prompt = messages[-1]["content"]
- context = create_context(messages[:-1])
-
- if not cookies or "SRCHD" not in cookies:
- cookies = default_cookies
- return stream_generate(prompt, tone, context, cookies)
-
-def create_context(messages: list[dict[str, str]]):
- context = "".join(f"[{message['role']}](#message)\n{message['content']}\n\n" for message in messages)
-
- return context
-
-class Conversation():
- def __init__(self, conversationId: str, clientId: str, conversationSignature: str) -> None:
- self.conversationId = conversationId
- self.clientId = clientId
- self.conversationSignature = conversationSignature
-
-async def create_conversation(session: ClientSession) -> Conversation:
- url = 'https://www.bing.com/turing/conversation/create?bundleVersion=1.1150.3'
-
- async with await session.get(url) as response:
- data = await response.json()
-
- conversationId = data.get('conversationId')
- clientId = data.get('clientId')
- conversationSignature = response.headers.get('X-Sydney-Encryptedconversationsignature')
-
- if not conversationId or not clientId or not conversationSignature:
- raise Exception('Failed to create conversation.')
-
- return Conversation(conversationId, clientId, conversationSignature)
-
-async def list_conversations(session: ClientSession) -> list:
- url = "https://www.bing.com/turing/conversation/chats"
- async with session.get(url) as response:
- response = await response.json()
- return response["chats"]
-
-async def delete_conversation(session: ClientSession, conversation: Conversation) -> list:
- url = "https://sydney.bing.com/sydney/DeleteSingleConversation"
- json = {
- "conversationId": conversation.conversationId,
- "conversationSignature": conversation.conversationSignature,
- "participant": {"id": conversation.clientId},
- "source": "cib",
- "optionsSets": ["autosave"]
- }
- async with session.post(url, json=json) as response:
- response = await response.json()
- return response["result"]["value"] == "Success"
-
-class Defaults:
- delimiter = "\x1e"
- ip_address = f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
-
- allowedMessageTypes = [
- "Chat",
- "Disengaged",
- "AdsQuery",
- "SemanticSerp",
- "GenerateContentQuery",
- "SearchQuery",
- "ActionRequest",
- "Context",
- "Progress",
- "AdsQuery",
- "SemanticSerp",
- ]
-
- sliceIds = [
- "winmuid3tf",
- "osbsdusgreccf",
- "ttstmout",
- "crchatrev",
- "winlongmsgtf",
- "ctrlworkpay",
- "norespwtf",
- "tempcacheread",
- "temptacache",
- "505scss0",
- "508jbcars0",
- "515enbotdets0",
- "5082tsports",
- "515vaoprvs",
- "424dagslnv1s0",
- "kcimgattcf",
- "427startpms0",
- ]
-
- location = {
- "locale": "en-US",
- "market": "en-US",
- "region": "US",
- "locationHints": [
- {
- "country": "United States",
- "state": "California",
- "city": "Los Angeles",
- "timezoneoffset": 8,
- "countryConfidence": 8,
- "Center": {"Latitude": 34.0536909, "Longitude": -118.242766},
- "RegionType": 2,
- "SourceType": 1,
- }
- ],
- }
-
- headers = {
- 'accept': '*/*',
- 'accept-language': 'en-US,en;q=0.9',
- 'cache-control': 'max-age=0',
- 'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
- 'sec-ch-ua-arch': '"x86"',
- 'sec-ch-ua-bitness': '"64"',
- 'sec-ch-ua-full-version': '"110.0.1587.69"',
- 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-model': '""',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-ch-ua-platform-version': '"15.0.0"',
- 'sec-fetch-dest': 'document',
- 'sec-fetch-mode': 'navigate',
- 'sec-fetch-site': 'none',
- 'sec-fetch-user': '?1',
- 'upgrade-insecure-requests': '1',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69',
- 'x-edge-shopping-flag': '1',
- 'x-forwarded-for': ip_address,
- }
-
- optionsSets = [
- 'saharasugg',
- 'enablenewsfc',
- 'clgalileo',
- 'gencontentv3',
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
- "h3precise"
- "dtappid",
- "cricinfo",
- "cricinfov2",
- "dv3sugg",
- "nojbfedge"
- ]
-
-def format_message(msg: dict) -> str:
- return json.dumps(msg, ensure_ascii=False) + Defaults.delimiter
-
-def create_message(conversation: Conversation, prompt: str, tone: str, context: str=None) -> str:
- request_id = str(uuid.uuid4())
- struct = {
- 'arguments': [
- {
- 'source': 'cib',
- 'optionsSets': Defaults.optionsSets,
- 'allowedMessageTypes': Defaults.allowedMessageTypes,
- 'sliceIds': Defaults.sliceIds,
- 'traceId': os.urandom(16).hex(),
- 'isStartOfSession': True,
- 'requestId': request_id,
- 'message': Defaults.location | {
- 'author': 'user',
- 'inputMethod': 'Keyboard',
- 'text': prompt,
- 'messageType': 'Chat',
- 'requestId': request_id,
- 'messageId': request_id,
- },
- 'tone': tone,
- 'spokenTextMode': 'None',
- 'conversationId': conversation.conversationId,
- 'participant': {
- 'id': conversation.clientId
- },
- }
- ],
- 'invocationId': '1',
- 'target': 'chat',
- 'type': 4
- }
-
- if context:
- struct['arguments'][0]['previousMessages'] = [{
- "author": "user",
- "description": context,
- "contextType": "WebPage",
- "messageType": "Context",
- "messageId": "discover-web--page-ping-mriduna-----"
- }]
- return format_message(struct)
-
-async def stream_generate(
- prompt: str,
- tone: str,
- context: str=None,
- cookies: dict=None,
- ):
- async with ClientSession(
- timeout=ClientTimeout(total=900),
- cookies=cookies,
- headers=Defaults.headers,
- ) as session:
- conversation = await create_conversation(session)
- try:
- async with session.ws_connect(
- f'wss://sydney.bing.com/sydney/ChatHub',
- autoping=False,
- params={'sec_access_token': conversation.conversationSignature}
- ) as wss:
-
- await wss.send_str(format_message({'protocol': 'json', 'version': 1}))
- await wss.receive(timeout=900)
- await wss.send_str(create_message(conversation, prompt, tone, context))
-
- response_txt = ''
- returned_text = ''
- final = False
-
- while not final:
- msg = await wss.receive(timeout=900)
- objects = msg.data.split(Defaults.delimiter)
- for obj in objects:
- if obj is None or not obj:
- continue
-
- response = json.loads(obj)
- if response.get('type') == 1 and response['arguments'][0].get('messages'):
- message = response['arguments'][0]['messages'][0]
- if (message['contentOrigin'] != 'Apology'):
- if 'adaptiveCards' in message:
- card = message['adaptiveCards'][0]['body'][0]
- if "text" in card:
- response_txt = card.get('text')
- if message.get('messageType'):
- inline_txt = card['inlines'][0].get('text')
- response_txt += inline_txt + '\n'
- elif message.get('contentType') == "IMAGE":
- query = urllib.parse.quote(message.get('text'))
- url = f"\nhttps://www.bing.com/images/create?q={query}"
- response_txt += url
- final = True
- if response_txt.startswith(returned_text):
- new = response_txt[len(returned_text):]
- if new != "\n":
- yield new
- returned_text = response_txt
- elif response.get('type') == 2:
- result = response['item']['result']
- if result.get('error'):
- raise Exception(f"{result['value']}: {result['message']}")
- return
- finally:
- await delete_conversation(session, conversation)
\ No newline at end of file
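An illustrative way to drive the async generator above; cookies fall back to the module's default_cookies, and the prompt is example data.

import asyncio

async def demo():
    messages = [{"role": "user", "content": "Write a haiku about the sea."}]
    async for token in Bing.create_async_generator("gpt-4", messages, tone=Tones.balanced):
        print(token, end="", flush=True)

asyncio.run(demo())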
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toonifypipeline.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toonifypipeline.d.ts
deleted file mode 100644
index 61b0178bbe83e7ea81586fc0220006872ac2ed87..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toonifypipeline.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import ToonifyPostFxPipeline from './shaders/toonify/ToonifyPostFxPipeline';
-export default ToonifyPostFxPipeline;
\ No newline at end of file
diff --git a/spaces/Akim/claudeAPI/README.md b/spaces/Akim/claudeAPI/README.md
deleted file mode 100644
index d7c94bc5de4370e3386ce5ea988a32429a4b52d3..0000000000000000000000000000000000000000
--- a/spaces/Akim/claudeAPI/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: ClaudeAPI
-emoji: 😻
-colorFrom: indigo
-colorTo: blue
-sdk: docker
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/cantonese.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/cantonese.py
deleted file mode 100644
index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/cantonese.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('jyutjyu')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ei˥'),
- ('B', 'biː˥'),
- ('C', 'siː˥'),
- ('D', 'tiː˥'),
- ('E', 'iː˥'),
- ('F', 'e˥fuː˨˩'),
- ('G', 'tsiː˥'),
- ('H', 'ɪk̚˥tsʰyː˨˩'),
- ('I', 'ɐi˥'),
- ('J', 'tsei˥'),
- ('K', 'kʰei˥'),
- ('L', 'e˥llou˨˩'),
- ('M', 'ɛːm˥'),
- ('N', 'ɛːn˥'),
- ('O', 'ou˥'),
- ('P', 'pʰiː˥'),
- ('Q', 'kʰiːu˥'),
- ('R', 'aː˥lou˨˩'),
- ('S', 'ɛː˥siː˨˩'),
- ('T', 'tʰiː˥'),
- ('U', 'juː˥'),
- ('V', 'wiː˥'),
- ('W', 'tʊk̚˥piː˥juː˥'),
- ('X', 'ɪk̚˥siː˨˩'),
- ('Y', 'waːi˥'),
- ('Z', 'iː˨sɛːt̚˥')
-]]
-
-
-def number_to_cantonese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def cantonese_to_ipa(text):
- text = number_to_cantonese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*？\s*', '? ', text)
-    text = re.sub(r'\s*！\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
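Example calls into the module above; the exact IPA output depends on the custom 'jyutjyu' OpenCC conversion data this repo relies on, so it is not reproduced here.

print(number_to_cantonese('2023年'))   # digits rewritten as Chinese numerals via cn2an
print(cantonese_to_ipa('你好。'))       # full pipeline: numbers -> jyutjyu conversion -> IPA-like symbols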
diff --git a/spaces/AlhitawiMohammed22/E2E_OCR/app.py b/spaces/AlhitawiMohammed22/E2E_OCR/app.py
deleted file mode 100644
index 930c413e962b647dcbe086c80919b56976ccb8ae..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/E2E_OCR/app.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import logging
-import time
-from pathlib import Path
-import contextlib
-
-logging.basicConfig(
- level=logging.INFO,
- format="%(asctime)s - %(levelname)s - %(message)s",
-)
-
-
-import gradio as gr
-import nltk
-import torch
-from det2rec import *
-
-_here = Path(__file__).parent
-
-nltk.download("stopwords")  # TODO: find where this requirement originates from
-
-
-def load_uploaded_file(file_obj, temp_dir: Path = None):
- """
- load_uploaded_file - process an uploaded file
- Args:
-        file_obj (file or list): Gradio file object, possibly wrapped in a list
-    Returns:
-        str, the path to the saved copy of the uploaded file (or None on failure)
- """
-
- # check if mysterious file object is a list
- if isinstance(file_obj, list):
- file_obj = file_obj[0]
- file_path = Path(file_obj.name)
-
-    if temp_dir is None:
-        temp_dir = _here / "temp"
-        temp_dir.mkdir(exist_ok=True)
-
- try:
- pdf_bytes_obj = open(file_path, "rb").read()
- temp_path = temp_dir / file_path.name if temp_dir else file_path
- # save to PDF file
- with open(temp_path, "wb") as f:
- f.write(pdf_bytes_obj)
- logging.info(f"The uploaded file saved to {temp_path}")
- return str(temp_path.resolve())
-
- except Exception as e:
- logging.error(f"Trying to load file with path {file_path}, error: {e}")
- print(f"Trying to load file with path {file_path}, error: {e}")
- return None
-
-
-def convert_PDF(
- pdf_obj,
- language: str = "en",
- max_pages=20,
-):
- """
- convert_PDF - convert a PDF file to text
- Args:
-        pdf_obj: the uploaded PDF as a Gradio file object (or a list containing one)
- language (str, optional): Language to use for OCR. Defaults to "en".
- Returns:
- str, the PDF file contents as text
- """
- # clear local text cache
- rm_local_text_files()
- global ocr_model
- st = time.perf_counter()
- if isinstance(pdf_obj, list):
- pdf_obj = pdf_obj[0]
- file_path = Path(pdf_obj.name)
- if not file_path.suffix == ".pdf":
- logging.error(f"File {file_path} is not a PDF file")
-
- html_error = f"""
-
- File {file_path} is not a PDF file. Please upload a PDF file.
-
- """
- return "File is not a PDF file", html_error, None
-
- conversion_stats = convert_PDF_to_Text(
- file_path,
- ocr_model=ocr_model,
- max_pages=max_pages,
- )
- converted_txt = conversion_stats["converted_text"]
- num_pages = conversion_stats["num_pages"]
- was_truncated = conversion_stats["truncated"]
- # if alt_lang: # TODO: fix this
-
- rt = round((time.perf_counter() - st) / 60, 2)
- print(f"Runtime: {rt} minutes")
- html = ""
- if was_truncated:
- html += f"
WARNING - PDF was truncated to {max_pages} pages
"
- html += f"
Runtime: {rt} minutes on CPU for {num_pages} pages
"
-
- _output_name = f"RESULT_{file_path.stem}_OCR.txt"
- with open(_output_name, "w", encoding="utf-8", errors="ignore") as f:
- f.write(converted_txt)
-
- return converted_txt, html, _output_name
-
-
-if __name__ == "__main__":
- logging.info("Starting app")
-
- use_GPU = torch.cuda.is_available()
- logging.info(f"Using GPU status: {use_GPU}")
- logging.info("Loading OCR model")
- with contextlib.redirect_stdout(None):
- ocr_model = ocr_predictor(
- "db_resnet50",
- "crnn_mobilenet_v3_large",
- pretrained=True,
- assume_straight_pages=True,
- )
-
- # define pdf bytes as None
- pdf_obj = _here / "exampler.pdf"
- pdf_obj = str(pdf_obj.resolve())
- _temp_dir = _here / "temp"
- _temp_dir.mkdir(exist_ok=True)
-
- logging.info("starting demo")
- demo = gr.Blocks()
-
- with demo:
-
- gr.Markdown("# PDF to Text")
- gr.Markdown(
- "A basic demo for end-to-end text detection and recognition where the input will be in pdf format and the result is text conversion using OCR from the [doctr](https://mindee.github.io/doctr/index.html) package"
- )
- gr.Markdown("---")
- gr.Markdown("---")
-
- with gr.Column():
-
- gr.Markdown("## Load Inputs")
- gr.Markdown("Upload your own file & replace the default. Files should be < 10MB to avoid upload issues - search for a PDF compressor online as needed.")
- gr.Markdown(
- "_If no file is uploaded, a sample PDF will be used. PDFs are truncated to 20 pages._"
- )
-
- uploaded_file = gr.File(
- label="Upload a PDF file",
- file_count="single",
- type="file",
- value=_here / "exampler.pdf",
- )
-
- gr.Markdown("---")
-
- with gr.Column():
- gr.Markdown("## Convert PDF to Text")
- convert_button = gr.Button("Convert PDF!", variant="primary")
-        out_placeholder = gr.HTML("Output will appear below:")
- gr.Markdown("### Output")
- OCR_text = gr.Textbox(
- label="OCR Result", placeholder="The OCR text will appear here"
- )
- text_file = gr.File(
- label="Download Text File",
- file_count="single",
- type="file",
- interactive=False,
- )
-
- convert_button.click(
- fn=convert_PDF,
- inputs=[uploaded_file],
- outputs=[OCR_text, out_placeholder, text_file],
- )
- demo.launch(enable_queue=True)
\ No newline at end of file
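For completeness, a hypothetical standalone call into convert_PDF from the app above, bypassing the Gradio UI; it assumes det2rec exposes ocr_predictor (the app imports it via a star import) and that exampler.pdf exists.

import app
from det2rec import ocr_predictor   # assumed to be re-exported by det2rec

app.ocr_model = ocr_predictor(
    "db_resnet50", "crnn_mobilenet_v3_large", pretrained=True, assume_straight_pages=True
)
text, status_html, txt_path = app.convert_PDF(open("exampler.pdf", "rb"))
print(text[:500])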
diff --git a/spaces/AliUsama98/Aliusama_spellchecker/README.md b/spaces/AliUsama98/Aliusama_spellchecker/README.md
deleted file mode 100644
index 0173f8a4c450c1ad396465a73c7f8fd442085699..0000000000000000000000000000000000000000
--- a/spaces/AliUsama98/Aliusama_spellchecker/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Aliusama Spellchecker
-emoji: 👀
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Alpaca233/ChatPDF-GUI/gpt_reader/prompt.py b/spaces/Alpaca233/ChatPDF-GUI/gpt_reader/prompt.py
deleted file mode 100644
index bd030e33da02d538631312e5c29dfac0eb49fad4..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/ChatPDF-GUI/gpt_reader/prompt.py
+++ /dev/null
@@ -1,26 +0,0 @@
-BASE_POINTS = """
-1. Who are the authors?
-2. What is the process of the proposed method?
-3. What is the performance of the proposed method? Please note down its performance metrics.
-4. What are the baseline models and their performances? Please note down these baseline methods.
-5. What dataset did this paper use?
-"""
-
-READING_PROMPT = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, You need to focus on these key points:{}
-"""
-
-READING_PROMT_V2 = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, You need to focus on these key points:{},
-
-And You need to generate a brief but informative title for this part.
-Your return format:
-- title: '...'
-- summary: '...'
-"""
-
-SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper."
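The templates above are presumably combined at runtime roughly like this (sketch):

system_prompt = READING_PROMPT.format(BASE_POINTS)   # or READING_PROMT_V2 for titled part summaries
print(system_prompt)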
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/dataset.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/dataset.py
deleted file mode 100644
index 96bbb8bb6da99122f350bc8e1a6390245840e32b..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/dataset.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import numbers
-import os
-import queue as Queue
-import threading
-
-import mxnet as mx
-import numpy as np
-import torch
-from torch.utils.data import DataLoader, Dataset
-from torchvision import transforms
-
-
-class BackgroundGenerator(threading.Thread):
- def __init__(self, generator, local_rank, max_prefetch=6):
- super(BackgroundGenerator, self).__init__()
- self.queue = Queue.Queue(max_prefetch)
- self.generator = generator
- self.local_rank = local_rank
- self.daemon = True
- self.start()
-
- def run(self):
- torch.cuda.set_device(self.local_rank)
- for item in self.generator:
- self.queue.put(item)
- self.queue.put(None)
-
- def next(self):
- next_item = self.queue.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __next__(self):
- return self.next()
-
- def __iter__(self):
- return self
-
-
-class DataLoaderX(DataLoader):
-
- def __init__(self, local_rank, **kwargs):
- super(DataLoaderX, self).__init__(**kwargs)
- self.stream = torch.cuda.Stream(local_rank)
- self.local_rank = local_rank
-
- def __iter__(self):
- self.iter = super(DataLoaderX, self).__iter__()
- self.iter = BackgroundGenerator(self.iter, self.local_rank)
- self.preload()
- return self
-
- def preload(self):
- self.batch = next(self.iter, None)
- if self.batch is None:
- return None
- with torch.cuda.stream(self.stream):
- for k in range(len(self.batch)):
- self.batch[k] = self.batch[k].to(device=self.local_rank, non_blocking=True)
-
- def __next__(self):
- torch.cuda.current_stream().wait_stream(self.stream)
- batch = self.batch
- if batch is None:
- raise StopIteration
- self.preload()
- return batch
-
-
-class MXFaceDataset(Dataset):
- def __init__(self, root_dir, local_rank):
- super(MXFaceDataset, self).__init__()
- self.transform = transforms.Compose(
- [transforms.ToPILImage(),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
- ])
- self.root_dir = root_dir
- self.local_rank = local_rank
- path_imgrec = os.path.join(root_dir, 'train.rec')
- path_imgidx = os.path.join(root_dir, 'train.idx')
- self.imgrec = mx.recordio.MXIndexedRecordIO(path_imgidx, path_imgrec, 'r')
- s = self.imgrec.read_idx(0)
- header, _ = mx.recordio.unpack(s)
- if header.flag > 0:
- self.header0 = (int(header.label[0]), int(header.label[1]))
- self.imgidx = np.array(range(1, int(header.label[0])))
- else:
- self.imgidx = np.array(list(self.imgrec.keys))
-
- def __getitem__(self, index):
- idx = self.imgidx[index]
- s = self.imgrec.read_idx(idx)
- header, img = mx.recordio.unpack(s)
- label = header.label
- if not isinstance(label, numbers.Number):
- label = label[0]
- label = torch.tensor(label, dtype=torch.long)
- sample = mx.image.imdecode(img).asnumpy()
- if self.transform is not None:
- sample = self.transform(sample)
- return sample, label
-
- def __len__(self):
- return len(self.imgidx)
-
-
-class SyntheticDataset(Dataset):
- def __init__(self, local_rank):
- super(SyntheticDataset, self).__init__()
- img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32)
- img = np.transpose(img, (2, 0, 1))
- img = torch.from_numpy(img).squeeze(0).float()
- img = ((img / 255) - 0.5) / 0.5
- self.img = img
- self.label = 1
-
- def __getitem__(self, index):
- return self.img, self.label
-
- def __len__(self):
- return 1000000
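A hedged single-GPU sketch of wiring these datasets into DataLoaderX; the record path is a placeholder, and a CUDA device is required since DataLoaderX prefetches onto cuda:local_rank.

local_rank = 0
train_set = SyntheticDataset(local_rank)   # or MXFaceDataset("/data/ms1m", local_rank) for a real .rec/.idx pair
loader = DataLoaderX(
    local_rank=local_rank,
    dataset=train_set,
    batch_size=128,
    shuffle=True,
    num_workers=2,
    pin_memory=True,
)
for img, label in loader:   # tensors are already moved to cuda:local_rank by preload()
    print(img.shape, label.shape)
    break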
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/partial_fc.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/partial_fc.py
deleted file mode 100644
index 17e2d25715d10ba446c957e1d2528b0687ed71d5..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/partial_fc.py
+++ /dev/null
@@ -1,222 +0,0 @@
-import logging
-import os
-
-import torch
-import torch.distributed as dist
-from torch.nn import Module
-from torch.nn.functional import normalize, linear
-from torch.nn.parameter import Parameter
-
-
-class PartialFC(Module):
- """
- Author: {Xiang An, Yang Xiao, XuHan Zhu} in DeepGlint,
- Partial FC: Training 10 Million Identities on a Single Machine
- See the original paper:
- https://arxiv.org/abs/2010.05222
- """
-
- @torch.no_grad()
- def __init__(self, rank, local_rank, world_size, batch_size, resume,
- margin_softmax, num_classes, sample_rate=1.0, embedding_size=512, prefix="./"):
- """
- rank: int
- Unique process(GPU) ID from 0 to world_size - 1.
- local_rank: int
- Unique process(GPU) ID within the server from 0 to 7.
- world_size: int
-            Total number of GPUs.
- batch_size: int
- Batch size on current rank(GPU).
- resume: bool
-            Whether to restore the softmax weights from a previously saved checkpoint.
- margin_softmax: callable
- A function of margin softmax, eg: cosface, arcface.
- num_classes: int
-            Total number of classes. The class centers are sharded across ranks, so each rank only stores about
-            num_classes // world_size of them. Required.
- sample_rate: float
-            The partial fc sampling rate. When the number of classes grows beyond roughly 2 million, sampling can
-            greatly speed up training and reduce GPU memory usage. Default is 1.0.
- embedding_size: int
- The feature dimension, default is 512.
- prefix: str
-            Directory in which the softmax weight checkpoints are saved. Default is './'.
- """
- super(PartialFC, self).__init__()
- #
- self.num_classes: int = num_classes
- self.rank: int = rank
- self.local_rank: int = local_rank
- self.device: torch.device = torch.device("cuda:{}".format(self.local_rank))
- self.world_size: int = world_size
- self.batch_size: int = batch_size
- self.margin_softmax: callable = margin_softmax
- self.sample_rate: float = sample_rate
- self.embedding_size: int = embedding_size
- self.prefix: str = prefix
- self.num_local: int = num_classes // world_size + int(rank < num_classes % world_size)
- self.class_start: int = num_classes // world_size * rank + min(rank, num_classes % world_size)
- self.num_sample: int = int(self.sample_rate * self.num_local)
-
- self.weight_name = os.path.join(self.prefix, "rank_{}_softmax_weight.pt".format(self.rank))
- self.weight_mom_name = os.path.join(self.prefix, "rank_{}_softmax_weight_mom.pt".format(self.rank))
-
- if resume:
- try:
- self.weight: torch.Tensor = torch.load(self.weight_name)
- self.weight_mom: torch.Tensor = torch.load(self.weight_mom_name)
- if self.weight.shape[0] != self.num_local or self.weight_mom.shape[0] != self.num_local:
- raise IndexError
- logging.info("softmax weight resume successfully!")
- logging.info("softmax weight mom resume successfully!")
- except (FileNotFoundError, KeyError, IndexError):
- self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)
- self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)
- logging.info("softmax weight init!")
- logging.info("softmax weight mom init!")
- else:
- self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)
- self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)
- logging.info("softmax weight init successfully!")
- logging.info("softmax weight mom init successfully!")
- self.stream: torch.cuda.Stream = torch.cuda.Stream(local_rank)
-
- self.index = None
- if int(self.sample_rate) == 1:
- self.update = lambda: 0
- self.sub_weight = Parameter(self.weight)
- self.sub_weight_mom = self.weight_mom
- else:
- self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank))
-
- def save_params(self):
-        """ Save the softmax weight and its momentum for this rank under prefix.
- """
- torch.save(self.weight.data, self.weight_name)
- torch.save(self.weight_mom, self.weight_mom_name)
-
- @torch.no_grad()
- def sample(self, total_label):
- """
-        Sample all positive class centers on each rank, and randomly select negative class centers to fill a fixed
-        `num_sample`.
-
-        total_label: tensor
-            Labels after the all-gather across all GPUs.
- """
- index_positive = (self.class_start <= total_label) & (total_label < self.class_start + self.num_local)
- total_label[~index_positive] = -1
- total_label[index_positive] -= self.class_start
- if int(self.sample_rate) != 1:
- positive = torch.unique(total_label[index_positive], sorted=True)
- if self.num_sample - positive.size(0) >= 0:
- perm = torch.rand(size=[self.num_local], device=self.device)
- perm[positive] = 2.0
- index = torch.topk(perm, k=self.num_sample)[1]
- index = index.sort()[0]
- else:
- index = positive
- self.index = index
- total_label[index_positive] = torch.searchsorted(index, total_label[index_positive])
- self.sub_weight = Parameter(self.weight[index])
- self.sub_weight_mom = self.weight_mom[index]
-
- def forward(self, total_features, norm_weight):
- """ Partial fc forward, `logits = X * sample(W)`
- """
- torch.cuda.current_stream().wait_stream(self.stream)
- logits = linear(total_features, norm_weight)
- return logits
-
- @torch.no_grad()
- def update(self):
- """ Set updated weight and weight_mom to memory bank.
- """
- self.weight_mom[self.index] = self.sub_weight_mom
- self.weight[self.index] = self.sub_weight
-
- def prepare(self, label, optimizer):
- """
-        Get the sampled class centers for calculating the softmax.
-
- label: tensor
- Label tensor on each rank.
- optimizer: opt
-            Optimizer for partial fc; it is updated to point at the sampled weights and their momentum buffer.
- """
- with torch.cuda.stream(self.stream):
- total_label = torch.zeros(
- size=[self.batch_size * self.world_size], device=self.device, dtype=torch.long)
- dist.all_gather(list(total_label.chunk(self.world_size, dim=0)), label)
- self.sample(total_label)
- optimizer.state.pop(optimizer.param_groups[-1]['params'][0], None)
- optimizer.param_groups[-1]['params'][0] = self.sub_weight
- optimizer.state[self.sub_weight]['momentum_buffer'] = self.sub_weight_mom
- norm_weight = normalize(self.sub_weight)
- return total_label, norm_weight
-
- def forward_backward(self, label, features, optimizer):
- """
-        Partial fc forward and backward with model parallelism.
-
- label: tensor
- Label tensor on each rank(GPU)
- features: tensor
- Features tensor on each rank(GPU)
- optimizer: optimizer
- Optimizer for partial fc
-
- Returns:
- --------
- x_grad: tensor
- The gradient of features.
- loss_v: tensor
- Loss value for cross entropy.
- """
- total_label, norm_weight = self.prepare(label, optimizer)
- total_features = torch.zeros(
- size=[self.batch_size * self.world_size, self.embedding_size], device=self.device)
- dist.all_gather(list(total_features.chunk(self.world_size, dim=0)), features.data)
- total_features.requires_grad = True
-
- logits = self.forward(total_features, norm_weight)
- logits = self.margin_softmax(logits, total_label)
-
- with torch.no_grad():
- max_fc = torch.max(logits, dim=1, keepdim=True)[0]
- dist.all_reduce(max_fc, dist.ReduceOp.MAX)
-
- # calculate exp(logits) and all-reduce
- logits_exp = torch.exp(logits - max_fc)
- logits_sum_exp = logits_exp.sum(dim=1, keepdims=True)
- dist.all_reduce(logits_sum_exp, dist.ReduceOp.SUM)
-
- # calculate prob
- logits_exp.div_(logits_sum_exp)
-
- # get one-hot
- grad = logits_exp
- index = torch.where(total_label != -1)[0]
- one_hot = torch.zeros(size=[index.size()[0], grad.size()[1]], device=grad.device)
- one_hot.scatter_(1, total_label[index, None], 1)
-
- # calculate loss
- loss = torch.zeros(grad.size()[0], 1, device=grad.device)
- loss[index] = grad[index].gather(1, total_label[index, None])
- dist.all_reduce(loss, dist.ReduceOp.SUM)
- loss_v = loss.clamp_min_(1e-30).log_().mean() * (-1)
-
- # calculate grad
- grad[index] -= one_hot
- grad.div_(self.batch_size * self.world_size)
-
- logits.backward(grad)
- if total_features.grad is not None:
- total_features.grad.detach_()
- x_grad: torch.Tensor = torch.zeros_like(features, requires_grad=True)
- # feature gradient all-reduce
- dist.reduce_scatter(x_grad, list(total_features.grad.chunk(self.world_size, dim=0)))
- x_grad = x_grad * self.world_size
- # backward backbone
- return x_grad, loss_v
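
For orientation, a sketch of how PartialFC is typically driven from one distributed training step (the backbone, the ArcFace margin function, the backbone optimizer, and all hyper-parameters below are illustrative assumptions, not part of the deleted file; `normalize` is the function imported at the top of the module):

    margin_softmax = ArcFace(s=64.0, m=0.5)   # any callable margin function; ArcFace assumed defined elsewhere
    module_partial_fc = PartialFC(
        rank=rank, local_rank=local_rank, world_size=world_size, batch_size=batch_size,
        resume=False, margin_softmax=margin_softmax, num_classes=num_classes,
        sample_rate=0.1, embedding_size=512, prefix="./checkpoints")
    opt_pfc = torch.optim.SGD(
        params=[{"params": module_partial_fc.parameters()}],
        lr=0.1, momentum=0.9, weight_decay=5e-4)

    features = normalize(backbone(img))                 # [batch_size, 512] embeddings on this rank
    x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc)
    features.backward(x_grad)                           # push the returned gradient into the backbone
    opt_backbone.step()
    opt_pfc.step()
    module_partial_fc.update()                          # write sampled weights/momentum back to the memory bank
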
diff --git a/spaces/Amrrs/DragGan-Inversion/training/loss.py b/spaces/Amrrs/DragGan-Inversion/training/loss.py
deleted file mode 100644
index 3b6d0833ca639bb3b08f216419dfa25f1e657da2..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/training/loss.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Loss functions."""
-
-import numpy as np
-import torch
-from torch_utils import training_stats
-from torch_utils.ops import conv2d_gradfix
-from torch_utils.ops import upfirdn2d
-
-# ----------------------------------------------------------------------------
-
-
-class Loss:
- # to be overridden by subclass
- def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, gain, cur_nimg):
- raise NotImplementedError()
-
-# ----------------------------------------------------------------------------
-
-
-class StyleGAN2Loss(Loss):
- def __init__(self, device, G, D, augment_pipe=None, r1_gamma=10, style_mixing_prob=0, pl_weight=0, pl_batch_shrink=2, pl_decay=0.01, pl_no_weight_grad=False, blur_init_sigma=0, blur_fade_kimg=0):
- super().__init__()
- self.device = device
- self.G = G
- self.D = D
- self.augment_pipe = augment_pipe
- self.r1_gamma = r1_gamma
- self.style_mixing_prob = style_mixing_prob
- self.pl_weight = pl_weight
- self.pl_batch_shrink = pl_batch_shrink
- self.pl_decay = pl_decay
- self.pl_no_weight_grad = pl_no_weight_grad
- self.pl_mean = torch.zeros([], device=device)
- self.blur_init_sigma = blur_init_sigma
- self.blur_fade_kimg = blur_fade_kimg
-
- def run_G(self, z, c, update_emas=False):
- ws = self.G.mapping(z, c, update_emas=update_emas)
- if self.style_mixing_prob > 0:
- with torch.autograd.profiler.record_function('style_mixing'):
- cutoff = torch.empty([], dtype=torch.int64,
- device=ws.device).random_(1, ws.shape[1])
- cutoff = torch.where(torch.rand(
- [], device=ws.device) < self.style_mixing_prob, cutoff, torch.full_like(cutoff, ws.shape[1]))
- ws[:, cutoff:] = self.G.mapping(
- torch.randn_like(z), c, update_emas=False)[:, cutoff:]
- img = self.G.synthesis(ws, update_emas=update_emas)
- return img, ws
-
- def run_D(self, img, c, blur_sigma=0, update_emas=False):
- blur_size = np.floor(blur_sigma * 3)
- if blur_size > 0:
- with torch.autograd.profiler.record_function('blur'):
- f = torch.arange(-blur_size, blur_size + 1,
- device=img.device).div(blur_sigma).square().neg().exp2()
- img = upfirdn2d.filter2d(img, f / f.sum())
- if self.augment_pipe is not None:
- img = self.augment_pipe(img)
- logits = self.D(img, c, update_emas=update_emas)
- return logits
-
- def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, gain, cur_nimg):
- assert phase in ['Gmain', 'Greg', 'Gboth', 'Dmain', 'Dreg', 'Dboth']
- if self.pl_weight == 0:
- phase = {'Greg': 'none', 'Gboth': 'Gmain'}.get(phase, phase)
- if self.r1_gamma == 0:
- phase = {'Dreg': 'none', 'Dboth': 'Dmain'}.get(phase, phase)
- blur_sigma = max(1 - cur_nimg / (self.blur_fade_kimg * 1e3), 0) * \
- self.blur_init_sigma if self.blur_fade_kimg > 0 else 0
-
- # Gmain: Maximize logits for generated images.
- if phase in ['Gmain', 'Gboth']:
- with torch.autograd.profiler.record_function('Gmain_forward'):
- gen_img, _gen_ws = self.run_G(gen_z, gen_c)
- gen_logits = self.run_D(gen_img, gen_c, blur_sigma=blur_sigma)
- training_stats.report('Loss/scores/fake', gen_logits)
- training_stats.report('Loss/signs/fake', gen_logits.sign())
- # -log(sigmoid(gen_logits))
- loss_Gmain = torch.nn.functional.softplus(-gen_logits)
- training_stats.report('Loss/G/loss', loss_Gmain)
- with torch.autograd.profiler.record_function('Gmain_backward'):
- loss_Gmain.mean().mul(gain).backward()
-
- # Gpl: Apply path length regularization.
- if phase in ['Greg', 'Gboth']:
- with torch.autograd.profiler.record_function('Gpl_forward'):
- batch_size = gen_z.shape[0] // self.pl_batch_shrink
- gen_img, gen_ws = self.run_G(
- gen_z[:batch_size], gen_c[:batch_size])
- pl_noise = torch.randn_like(
- gen_img) / np.sqrt(gen_img.shape[2] * gen_img.shape[3])
- with torch.autograd.profiler.record_function('pl_grads'), conv2d_gradfix.no_weight_gradients(self.pl_no_weight_grad):
- pl_grads = torch.autograd.grad(outputs=[(
- gen_img * pl_noise).sum()], inputs=[gen_ws], create_graph=True, only_inputs=True)[0]
- pl_lengths = pl_grads.square().sum(2).mean(1).sqrt()
- pl_mean = self.pl_mean.lerp(pl_lengths.mean(), self.pl_decay)
- self.pl_mean.copy_(pl_mean.detach())
- pl_penalty = (pl_lengths - pl_mean).square()
- training_stats.report('Loss/pl_penalty', pl_penalty)
- loss_Gpl = pl_penalty * self.pl_weight
- training_stats.report('Loss/G/reg', loss_Gpl)
- with torch.autograd.profiler.record_function('Gpl_backward'):
- loss_Gpl.mean().mul(gain).backward()
-
- # Dmain: Minimize logits for generated images.
- loss_Dgen = 0
- if phase in ['Dmain', 'Dboth']:
- with torch.autograd.profiler.record_function('Dgen_forward'):
- gen_img, _gen_ws = self.run_G(gen_z, gen_c, update_emas=True)
- gen_logits = self.run_D(
- gen_img, gen_c, blur_sigma=blur_sigma, update_emas=True)
- training_stats.report('Loss/scores/fake', gen_logits)
- training_stats.report('Loss/signs/fake', gen_logits.sign())
- loss_Dgen = torch.nn.functional.softplus(
- gen_logits) # -log(1 - sigmoid(gen_logits))
- with torch.autograd.profiler.record_function('Dgen_backward'):
- loss_Dgen.mean().mul(gain).backward()
-
- # Dmain: Maximize logits for real images.
- # Dr1: Apply R1 regularization.
- if phase in ['Dmain', 'Dreg', 'Dboth']:
- name = 'Dreal' if phase == 'Dmain' else 'Dr1' if phase == 'Dreg' else 'Dreal_Dr1'
- with torch.autograd.profiler.record_function(name + '_forward'):
- real_img_tmp = real_img.detach().requires_grad_(
- phase in ['Dreg', 'Dboth'])
- real_logits = self.run_D(
- real_img_tmp, real_c, blur_sigma=blur_sigma)
- training_stats.report('Loss/scores/real', real_logits)
- training_stats.report('Loss/signs/real', real_logits.sign())
-
- loss_Dreal = 0
- if phase in ['Dmain', 'Dboth']:
- # -log(sigmoid(real_logits))
- loss_Dreal = torch.nn.functional.softplus(-real_logits)
- training_stats.report(
- 'Loss/D/loss', loss_Dgen + loss_Dreal)
-
- loss_Dr1 = 0
- if phase in ['Dreg', 'Dboth']:
- with torch.autograd.profiler.record_function('r1_grads'), conv2d_gradfix.no_weight_gradients():
- r1_grads = torch.autograd.grad(outputs=[real_logits.sum()], inputs=[
- real_img_tmp], create_graph=True, only_inputs=True)[0]
- r1_penalty = r1_grads.square().sum([1, 2, 3])
- loss_Dr1 = r1_penalty * (self.r1_gamma / 2)
- training_stats.report('Loss/r1_penalty', r1_penalty)
- training_stats.report('Loss/D/reg', loss_Dr1)
-
- with torch.autograd.profiler.record_function(name + '_backward'):
- (loss_Dreal + loss_Dr1).mean().mul(gain).backward()
-
-# ----------------------------------------------------------------------------
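
As a rough sketch of how this loss object is consumed by a training loop (the networks, optimizers, conditioning labels, latents, and real batches below are assumptions; in practice the regularization phases are only run every few iterations):

    loss = StyleGAN2Loss(device=device, G=G, D=D, r1_gamma=10, pl_weight=2,
                         blur_init_sigma=10, blur_fade_kimg=200)
    for phase, opt in [('Gmain', opt_G), ('Greg', opt_G), ('Dmain', opt_D), ('Dreg', opt_D)]:
        opt.zero_grad(set_to_none=True)
        loss.accumulate_gradients(phase=phase, real_img=real_img, real_c=real_c,
                                  gen_z=gen_z, gen_c=gen_c, gain=1.0, cur_nimg=cur_nimg)
        opt.step()
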
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/schedules/schedule_20e.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/schedules/schedule_20e.py
deleted file mode 100644
index 00e859022156dcbef6501c04d03f335639f2c1f6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/schedules/schedule_20e.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=0.001,
- step=[16, 19])
-runner = dict(type='EpochBasedRunner', max_epochs=20)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/swin/mask_rcnn_swin_small_patch4_window7_mstrain_480-800_adamw_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/swin/mask_rcnn_swin_small_patch4_window7_mstrain_480-800_adamw_3x_coco.py
deleted file mode 100644
index ee15134ba3f0a0788cbf4eb69cf080d01e08ddab..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/swin/mask_rcnn_swin_small_patch4_window7_mstrain_480-800_adamw_3x_coco.py
+++ /dev/null
@@ -1,80 +0,0 @@
-_base_ = [
- '../_base_/models/mask_rcnn_swin_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- backbone=dict(
- embed_dim=96,
- depths=[2, 2, 18, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- ape=False,
- drop_path_rate=0.2,
- patch_norm=True,
- use_checkpoint=False
- ),
- neck=dict(in_channels=[96, 192, 384, 768]))
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR / Sparse RCNN
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='AutoAugment',
- policies=[
- [
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]
- ]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-
-optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-lr_config = dict(step=[27, 33])
-runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
-
-# do not use mmdet version fp16
-fp16 = None
-optimizer_config = dict(
- type="DistOptimizerHook",
- update_interval=1,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- use_fp16=True,
-)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x1024_160k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x1024_160k_cityscapes.py
deleted file mode 100644
index 394a61c99f038c94fce58ac9c422b7c3ee4b5f50..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x1024_160k_cityscapes.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = './fcn_hr18_512x1024_160k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w48',
- backbone=dict(
- extra=dict(
- stage2=dict(num_channels=(48, 96)),
- stage3=dict(num_channels=(48, 96, 192)),
- stage4=dict(num_channels=(48, 96, 192, 384)))),
- decode_head=dict(
- in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384])))
diff --git a/spaces/Armandoliv/t5-summarize-app-scitldr/app.py b/spaces/Armandoliv/t5-summarize-app-scitldr/app.py
deleted file mode 100644
index b95922de663c3da8c38e771a79900ff19aad363a..0000000000000000000000000000000000000000
--- a/spaces/Armandoliv/t5-summarize-app-scitldr/app.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import gradio as gr
-import torch
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-tokenizer = AutoTokenizer.from_pretrained("Armandoliv/t5-small-summarizer-scitldr")
-
-model = AutoModelForSeq2SeqLM.from_pretrained("Armandoliv/t5-small-summarizer-scitldr")
-device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-model = model.to(device)
-
-def main_summarizer(text):
- max_input_length = 1024
- preprocess_text = text.strip().replace("\n"," ").replace("’", "'").strip()
- tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt", truncation=True, max_length=max_input_length,).to(device)
-
- summary_ids = model.generate(
- tokenized_text,
- max_length=256,
- num_beams=8,
- repetition_penalty=3.0,
- length_penalty=2.5,
- early_stopping=False
- )
-
- output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
-
- return output
-
-inputs = [gr.Textbox(lines=10, placeholder="Text Here...", label="Input")]
-outputs = gr.Text( label="Summary")
-title="Text summarisation app"
-description = "This demo uses AI Models to summarize long text.\nIt focus on scientific texts."
-
-io = gr.Interface(fn=main_summarizer, inputs=inputs, outputs=outputs, title=title, description = description,
-
- css= """.gr-button-primary { background: -webkit-linear-gradient(
- 90deg, #355764 0%, #55a8a1 100% ) !important; background: #355764;
- background: linear-gradient(
- 90deg, #355764 0%, #55a8a1 100% ) !important;
- background: -moz-linear-gradient( 90deg, #355764 0%, #55a8a1 100% ) !important;
- background: -webkit-linear-gradient(
- 90deg, #355764 0%, #55a8a1 100% ) !important;
- color:white !important}"""
- )
-
-io.launch()
-
\ No newline at end of file
diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/commons.py b/spaces/Artrajz/vits-simple-api/bert_vits2/commons.py
deleted file mode 100644
index 970489852841b0350f945b10e1c6e572860e9da8..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/bert_vits2/commons.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
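
A couple of quick, illustrative calls into the helpers above (the shapes here are arbitrary examples):

    lengths = torch.tensor([3, 5])
    mask = sequence_mask(lengths)          # shape [2, 5]; mask[0] == [True, True, True, False, False]

    x = torch.randn(2, 192, 5)             # [batch, channels, time]
    segments, ids_str = rand_slice_segments(x, x_lengths=lengths, segment_size=2)   # segments: [2, 192, 2]
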
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/auth.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/auth.py
deleted file mode 100644
index c0efa765c853c089c6b1469e82d2e94a2d1cb5e0..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/auth.py
+++ /dev/null
@@ -1,559 +0,0 @@
-"""Network Authentication Helpers
-
-Contains interface (MultiDomainBasicAuth) and associated glue code for
-providing credentials in the context of network requests.
-"""
-import logging
-import os
-import shutil
-import subprocess
-import sysconfig
-import typing
-import urllib.parse
-from abc import ABC, abstractmethod
-from functools import lru_cache
-from os.path import commonprefix
-from pathlib import Path
-from typing import Any, Dict, List, NamedTuple, Optional, Tuple
-
-from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth
-from pip._vendor.requests.models import Request, Response
-from pip._vendor.requests.utils import get_netrc_auth
-
-from pip._internal.utils.logging import getLogger
-from pip._internal.utils.misc import (
- ask,
- ask_input,
- ask_password,
- remove_auth_from_url,
- split_auth_netloc_from_url,
-)
-from pip._internal.vcs.versioncontrol import AuthInfo
-
-logger = getLogger(__name__)
-
-KEYRING_DISABLED = False
-
-
-class Credentials(NamedTuple):
- url: str
- username: str
- password: str
-
-
-class KeyRingBaseProvider(ABC):
- """Keyring base provider interface"""
-
- has_keyring: bool
-
- @abstractmethod
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- ...
-
- @abstractmethod
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- ...
-
-
-class KeyRingNullProvider(KeyRingBaseProvider):
- """Keyring null provider"""
-
- has_keyring = False
-
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- return None
-
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- return None
-
-
-class KeyRingPythonProvider(KeyRingBaseProvider):
- """Keyring interface which uses locally imported `keyring`"""
-
- has_keyring = True
-
- def __init__(self) -> None:
- import keyring
-
- self.keyring = keyring
-
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- # Support keyring's get_credential interface which supports getting
- # credentials without a username. This is only available for
- # keyring>=15.2.0.
- if hasattr(self.keyring, "get_credential"):
- logger.debug("Getting credentials from keyring for %s", url)
- cred = self.keyring.get_credential(url, username)
- if cred is not None:
- return cred.username, cred.password
- return None
-
- if username is not None:
- logger.debug("Getting password from keyring for %s", url)
- password = self.keyring.get_password(url, username)
- if password:
- return username, password
- return None
-
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- self.keyring.set_password(url, username, password)
-
-
-class KeyRingCliProvider(KeyRingBaseProvider):
- """Provider which uses `keyring` cli
-
-    Instead of calling the keyring package installed alongside pip,
-    we call keyring on the command line, which enables pip to use
-    whichever installation of keyring is available first in
-    PATH.
- """
-
- has_keyring = True
-
- def __init__(self, cmd: str) -> None:
- self.keyring = cmd
-
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- # This is the default implementation of keyring.get_credential
- # https://github.com/jaraco/keyring/blob/97689324abcf01bd1793d49063e7ca01e03d7d07/keyring/backend.py#L134-L139
- if username is not None:
- password = self._get_password(url, username)
- if password is not None:
- return username, password
- return None
-
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- return self._set_password(url, username, password)
-
- def _get_password(self, service_name: str, username: str) -> Optional[str]:
- """Mirror the implementation of keyring.get_password using cli"""
- if self.keyring is None:
- return None
-
- cmd = [self.keyring, "get", service_name, username]
- env = os.environ.copy()
- env["PYTHONIOENCODING"] = "utf-8"
- res = subprocess.run(
- cmd,
- stdin=subprocess.DEVNULL,
- stdout=subprocess.PIPE,
- env=env,
- )
- if res.returncode:
- return None
- return res.stdout.decode("utf-8").strip(os.linesep)
-
- def _set_password(self, service_name: str, username: str, password: str) -> None:
- """Mirror the implementation of keyring.set_password using cli"""
- if self.keyring is None:
- return None
- env = os.environ.copy()
- env["PYTHONIOENCODING"] = "utf-8"
- subprocess.run(
- [self.keyring, "set", service_name, username],
- input=f"{password}{os.linesep}".encode("utf-8"),
- env=env,
- check=True,
- )
- return None
-
-
-@lru_cache(maxsize=None)
-def get_keyring_provider(provider: str) -> KeyRingBaseProvider:
- logger.verbose("Keyring provider requested: %s", provider)
-
- # keyring has previously failed and been disabled
- if KEYRING_DISABLED:
- provider = "disabled"
- if provider in ["import", "auto"]:
- try:
- impl = KeyRingPythonProvider()
- logger.verbose("Keyring provider set: import")
- return impl
- except ImportError:
- pass
- except Exception as exc:
- # In the event of an unexpected exception
- # we should warn the user
- msg = "Installed copy of keyring fails with exception %s"
- if provider == "auto":
- msg = msg + ", trying to find a keyring executable as a fallback"
- logger.warning(msg, exc, exc_info=logger.isEnabledFor(logging.DEBUG))
- if provider in ["subprocess", "auto"]:
- cli = shutil.which("keyring")
- if cli and cli.startswith(sysconfig.get_path("scripts")):
- # all code within this function is stolen from shutil.which implementation
- @typing.no_type_check
- def PATH_as_shutil_which_determines_it() -> str:
- path = os.environ.get("PATH", None)
- if path is None:
- try:
- path = os.confstr("CS_PATH")
- except (AttributeError, ValueError):
- # os.confstr() or CS_PATH is not available
- path = os.defpath
- # bpo-35755: Don't use os.defpath if the PATH environment variable is
- # set to an empty string
-
- return path
-
- scripts = Path(sysconfig.get_path("scripts"))
-
- paths = []
- for path in PATH_as_shutil_which_determines_it().split(os.pathsep):
- p = Path(path)
- try:
- if not p.samefile(scripts):
- paths.append(path)
- except FileNotFoundError:
- pass
-
- path = os.pathsep.join(paths)
-
- cli = shutil.which("keyring", path=path)
-
- if cli:
- logger.verbose("Keyring provider set: subprocess with executable %s", cli)
- return KeyRingCliProvider(cli)
-
- logger.verbose("Keyring provider set: disabled")
- return KeyRingNullProvider()
-
-
-class MultiDomainBasicAuth(AuthBase):
- def __init__(
- self,
- prompting: bool = True,
- index_urls: Optional[List[str]] = None,
- keyring_provider: str = "auto",
- ) -> None:
- self.prompting = prompting
- self.index_urls = index_urls
- self.keyring_provider = keyring_provider # type: ignore[assignment]
- self.passwords: Dict[str, AuthInfo] = {}
- # When the user is prompted to enter credentials and keyring is
- # available, we will offer to save them. If the user accepts,
- # this value is set to the credentials they entered. After the
- # request authenticates, the caller should call
- # ``save_credentials`` to save these.
- self._credentials_to_save: Optional[Credentials] = None
-
- @property
- def keyring_provider(self) -> KeyRingBaseProvider:
- return get_keyring_provider(self._keyring_provider)
-
- @keyring_provider.setter
- def keyring_provider(self, provider: str) -> None:
- # The free function get_keyring_provider has been decorated with
- # functools.cache. If an exception occurs in get_keyring_auth that
-        # cache will be cleared and keyring disabled; take that into account
- # if you want to remove this indirection.
- self._keyring_provider = provider
-
- @property
- def use_keyring(self) -> bool:
- # We won't use keyring when --no-input is passed unless
- # a specific provider is requested because it might require
- # user interaction
- return self.prompting or self._keyring_provider not in ["auto", "disabled"]
-
- def _get_keyring_auth(
- self,
- url: Optional[str],
- username: Optional[str],
- ) -> Optional[AuthInfo]:
- """Return the tuple auth for a given url from keyring."""
- # Do nothing if no url was provided
- if not url:
- return None
-
- try:
- return self.keyring_provider.get_auth_info(url, username)
- except Exception as exc:
- logger.warning(
- "Keyring is skipped due to an exception: %s",
- str(exc),
- )
- global KEYRING_DISABLED
- KEYRING_DISABLED = True
- get_keyring_provider.cache_clear()
- return None
-
- def _get_index_url(self, url: str) -> Optional[str]:
- """Return the original index URL matching the requested URL.
-
- Cached or dynamically generated credentials may work against
- the original index URL rather than just the netloc.
-
- The provided url should have had its username and password
- removed already. If the original index url had credentials then
- they will be included in the return value.
-
- Returns None if no matching index was found, or if --no-index
- was specified by the user.
- """
- if not url or not self.index_urls:
- return None
-
- url = remove_auth_from_url(url).rstrip("/") + "/"
- parsed_url = urllib.parse.urlsplit(url)
-
- candidates = []
-
- for index in self.index_urls:
- index = index.rstrip("/") + "/"
- parsed_index = urllib.parse.urlsplit(remove_auth_from_url(index))
- if parsed_url == parsed_index:
- return index
-
- if parsed_url.netloc != parsed_index.netloc:
- continue
-
- candidate = urllib.parse.urlsplit(index)
- candidates.append(candidate)
-
- if not candidates:
- return None
-
- candidates.sort(
- reverse=True,
- key=lambda candidate: commonprefix(
- [
- parsed_url.path,
- candidate.path,
- ]
- ).rfind("/"),
- )
-
- return urllib.parse.urlunsplit(candidates[0])
-
- def _get_new_credentials(
- self,
- original_url: str,
- *,
- allow_netrc: bool = True,
- allow_keyring: bool = False,
- ) -> AuthInfo:
- """Find and return credentials for the specified URL."""
- # Split the credentials and netloc from the url.
- url, netloc, url_user_password = split_auth_netloc_from_url(
- original_url,
- )
-
- # Start with the credentials embedded in the url
- username, password = url_user_password
- if username is not None and password is not None:
- logger.debug("Found credentials in url for %s", netloc)
- return url_user_password
-
- # Find a matching index url for this request
- index_url = self._get_index_url(url)
- if index_url:
- # Split the credentials from the url.
- index_info = split_auth_netloc_from_url(index_url)
- if index_info:
- index_url, _, index_url_user_password = index_info
- logger.debug("Found index url %s", index_url)
-
- # If an index URL was found, try its embedded credentials
- if index_url and index_url_user_password[0] is not None:
- username, password = index_url_user_password
- if username is not None and password is not None:
- logger.debug("Found credentials in index url for %s", netloc)
- return index_url_user_password
-
- # Get creds from netrc if we still don't have them
- if allow_netrc:
- netrc_auth = get_netrc_auth(original_url)
- if netrc_auth:
- logger.debug("Found credentials in netrc for %s", netloc)
- return netrc_auth
-
- # If we don't have a password and keyring is available, use it.
- if allow_keyring:
- # The index url is more specific than the netloc, so try it first
- # fmt: off
- kr_auth = (
- self._get_keyring_auth(index_url, username) or
- self._get_keyring_auth(netloc, username)
- )
- # fmt: on
- if kr_auth:
- logger.debug("Found credentials in keyring for %s", netloc)
- return kr_auth
-
- return username, password
-
- def _get_url_and_credentials(
- self, original_url: str
- ) -> Tuple[str, Optional[str], Optional[str]]:
- """Return the credentials to use for the provided URL.
-
- If allowed, netrc and keyring may be used to obtain the
- correct credentials.
-
- Returns (url_without_credentials, username, password). Note
- that even if the original URL contains credentials, this
- function may return a different username and password.
- """
- url, netloc, _ = split_auth_netloc_from_url(original_url)
-
- # Try to get credentials from original url
- username, password = self._get_new_credentials(original_url)
-
- # If credentials not found, use any stored credentials for this netloc.
- # Do this if either the username or the password is missing.
- # This accounts for the situation in which the user has specified
- # the username in the index url, but the password comes from keyring.
- if (username is None or password is None) and netloc in self.passwords:
- un, pw = self.passwords[netloc]
- # It is possible that the cached credentials are for a different username,
- # in which case the cache should be ignored.
- if username is None or username == un:
- username, password = un, pw
-
- if username is not None or password is not None:
- # Convert the username and password if they're None, so that
- # this netloc will show up as "cached" in the conditional above.
- # Further, HTTPBasicAuth doesn't accept None, so it makes sense to
- # cache the value that is going to be used.
- username = username or ""
- password = password or ""
-
- # Store any acquired credentials.
- self.passwords[netloc] = (username, password)
-
- assert (
- # Credentials were found
- (username is not None and password is not None)
- # Credentials were not found
- or (username is None and password is None)
- ), f"Could not load credentials from url: {original_url}"
-
- return url, username, password
-
- def __call__(self, req: Request) -> Request:
- # Get credentials for this request
- url, username, password = self._get_url_and_credentials(req.url)
-
- # Set the url of the request to the url without any credentials
- req.url = url
-
- if username is not None and password is not None:
- # Send the basic auth with this request
- req = HTTPBasicAuth(username, password)(req)
-
- # Attach a hook to handle 401 responses
- req.register_hook("response", self.handle_401)
-
- return req
-
- # Factored out to allow for easy patching in tests
- def _prompt_for_password(
- self, netloc: str
- ) -> Tuple[Optional[str], Optional[str], bool]:
- username = ask_input(f"User for {netloc}: ") if self.prompting else None
- if not username:
- return None, None, False
- if self.use_keyring:
- auth = self._get_keyring_auth(netloc, username)
- if auth and auth[0] is not None and auth[1] is not None:
- return auth[0], auth[1], False
- password = ask_password("Password: ")
- return username, password, True
-
- # Factored out to allow for easy patching in tests
- def _should_save_password_to_keyring(self) -> bool:
- if (
- not self.prompting
- or not self.use_keyring
- or not self.keyring_provider.has_keyring
- ):
- return False
- return ask("Save credentials to keyring [y/N]: ", ["y", "n"]) == "y"
-
- def handle_401(self, resp: Response, **kwargs: Any) -> Response:
- # We only care about 401 responses, anything else we want to just
- # pass through the actual response
- if resp.status_code != 401:
- return resp
-
- username, password = None, None
-
- # Query the keyring for credentials:
- if self.use_keyring:
- username, password = self._get_new_credentials(
- resp.url,
- allow_netrc=False,
- allow_keyring=True,
- )
-
- # We are not able to prompt the user so simply return the response
- if not self.prompting and not username and not password:
- return resp
-
- parsed = urllib.parse.urlparse(resp.url)
-
- # Prompt the user for a new username and password
- save = False
- if not username and not password:
- username, password, save = self._prompt_for_password(parsed.netloc)
-
- # Store the new username and password to use for future requests
- self._credentials_to_save = None
- if username is not None and password is not None:
- self.passwords[parsed.netloc] = (username, password)
-
- # Prompt to save the password to keyring
- if save and self._should_save_password_to_keyring():
- self._credentials_to_save = Credentials(
- url=parsed.netloc,
- username=username,
- password=password,
- )
-
- # Consume content and release the original connection to allow our new
- # request to reuse the same one.
- resp.content
- resp.raw.release_conn()
-
- # Add our new username and password to the request
- req = HTTPBasicAuth(username or "", password or "")(resp.request)
- req.register_hook("response", self.warn_on_401)
-
- # On successful request, save the credentials that were used to
- # keyring. (Note that if the user responded "no" above, this member
- # is not set and nothing will be saved.)
- if self._credentials_to_save:
- req.register_hook("response", self.save_credentials)
-
- # Send our new request
- new_resp = resp.connection.send(req, **kwargs)
- new_resp.history.append(resp)
-
- return new_resp
-
- def warn_on_401(self, resp: Response, **kwargs: Any) -> None:
- """Response callback to warn about incorrect credentials."""
- if resp.status_code == 401:
- logger.warning(
- "401 Error, Credentials not correct for %s",
- resp.request.url,
- )
-
- def save_credentials(self, resp: Response, **kwargs: Any) -> None:
- """Response callback to save credentials on success."""
- assert (
- self.keyring_provider.has_keyring
- ), "should never reach here without keyring"
-
- creds = self._credentials_to_save
- self._credentials_to_save = None
- if creds and resp.status_code < 400:
- try:
- logger.info("Saving credentials to keyring")
- self.keyring_provider.save_auth_info(
- creds.url, creds.username, creds.password
- )
- except Exception:
- logger.exception("Failed to save credentials")
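
Roughly, this handler is attached as the auth object of a requests session, so every request goes through the credential lookup (URL, index URL, netrc, keyring, interactive prompt) described in the docstrings above; the index URL and package path below are placeholders:

    from pip._vendor import requests

    session = requests.Session()
    session.auth = MultiDomainBasicAuth(
        index_urls=["https://pypi.example.com/simple/"],
        keyring_provider="auto")
    resp = session.get("https://pypi.example.com/simple/somepackage/")
    # on a 401, the handle_401 hook queries keyring and/or prompts, then retries the request
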
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/exceptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/exceptions.py
deleted file mode 100644
index 168d07390dfc366102b8197e4b271e493bd94d11..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/exceptions.py
+++ /dev/null
@@ -1,141 +0,0 @@
-"""
-requests.exceptions
-~~~~~~~~~~~~~~~~~~~
-
-This module contains the set of Requests' exceptions.
-"""
-from pip._vendor.urllib3.exceptions import HTTPError as BaseHTTPError
-
-from .compat import JSONDecodeError as CompatJSONDecodeError
-
-
-class RequestException(IOError):
- """There was an ambiguous exception that occurred while handling your
- request.
- """
-
- def __init__(self, *args, **kwargs):
- """Initialize RequestException with `request` and `response` objects."""
- response = kwargs.pop("response", None)
- self.response = response
- self.request = kwargs.pop("request", None)
- if response is not None and not self.request and hasattr(response, "request"):
- self.request = self.response.request
- super().__init__(*args, **kwargs)
-
-
-class InvalidJSONError(RequestException):
- """A JSON error occurred."""
-
-
-class JSONDecodeError(InvalidJSONError, CompatJSONDecodeError):
- """Couldn't decode the text into json"""
-
- def __init__(self, *args, **kwargs):
- """
- Construct the JSONDecodeError instance first with all
-        args. Then use its args to construct the IOError so that
- the json specific args aren't used as IOError specific args
- and the error message from JSONDecodeError is preserved.
- """
- CompatJSONDecodeError.__init__(self, *args)
- InvalidJSONError.__init__(self, *self.args, **kwargs)
-
-
-class HTTPError(RequestException):
- """An HTTP error occurred."""
-
-
-class ConnectionError(RequestException):
- """A Connection error occurred."""
-
-
-class ProxyError(ConnectionError):
- """A proxy error occurred."""
-
-
-class SSLError(ConnectionError):
- """An SSL error occurred."""
-
-
-class Timeout(RequestException):
- """The request timed out.
-
- Catching this error will catch both
- :exc:`~requests.exceptions.ConnectTimeout` and
- :exc:`~requests.exceptions.ReadTimeout` errors.
- """
-
-
-class ConnectTimeout(ConnectionError, Timeout):
- """The request timed out while trying to connect to the remote server.
-
- Requests that produced this error are safe to retry.
- """
-
-
-class ReadTimeout(Timeout):
- """The server did not send any data in the allotted amount of time."""
-
-
-class URLRequired(RequestException):
- """A valid URL is required to make a request."""
-
-
-class TooManyRedirects(RequestException):
- """Too many redirects."""
-
-
-class MissingSchema(RequestException, ValueError):
- """The URL scheme (e.g. http or https) is missing."""
-
-
-class InvalidSchema(RequestException, ValueError):
- """The URL scheme provided is either invalid or unsupported."""
-
-
-class InvalidURL(RequestException, ValueError):
- """The URL provided was somehow invalid."""
-
-
-class InvalidHeader(RequestException, ValueError):
- """The header value provided was somehow invalid."""
-
-
-class InvalidProxyURL(InvalidURL):
- """The proxy URL provided is invalid."""
-
-
-class ChunkedEncodingError(RequestException):
- """The server declared chunked encoding but sent an invalid chunk."""
-
-
-class ContentDecodingError(RequestException, BaseHTTPError):
- """Failed to decode response content."""
-
-
-class StreamConsumedError(RequestException, TypeError):
- """The content for this response was already consumed."""
-
-
-class RetryError(RequestException):
- """Custom retries logic failed"""
-
-
-class UnrewindableBodyError(RequestException):
- """Requests encountered an error when trying to rewind a body."""
-
-
-# Warnings
-
-
-class RequestsWarning(Warning):
- """Base warning for Requests."""
-
-
-class FileModeWarning(RequestsWarning, DeprecationWarning):
- """A file was opened in text mode, but Requests determined its binary length."""
-
-
-class RequestsDependencyWarning(RequestsWarning):
- """An imported dependency doesn't match the expected version range."""
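
A short, hedged example of catching these exception classes (shown with the public requests package for brevity; inside pip the same classes live under pip._vendor.requests.exceptions):

    import requests

    try:
        resp = requests.get("https://example.org/api", timeout=5)
        resp.raise_for_status()                    # raises HTTPError for 4xx/5xx responses
    except requests.exceptions.ConnectTimeout:
        pass                                       # safe to retry, per the docstring above
    except requests.exceptions.Timeout:
        pass                                       # also covers ReadTimeout
    except requests.exceptions.HTTPError as err:
        print(err.response.status_code)
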
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py
deleted file mode 100644
index 1c752029b7fc64ec375a55182e5342c9eb48bb33..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from detectron2.modeling.meta_arch.fcos import FCOS, FCOSHead
-
-from .retinanet import model
-
-model._target_ = FCOS
-
-del model.anchor_generator
-del model.box2box_transform
-del model.anchor_matcher
-del model.input_format
-
-# Use P5 instead of C5 to compute P6/P7
-# (Sec 2.2 of https://arxiv.org/abs/2006.09214)
-model.backbone.top_block.in_feature = "p5"
-model.backbone.top_block.in_channels = 256
-
-# New score threshold determined based on sqrt(cls_score * centerness)
-model.test_score_thresh = 0.2
-model.test_nms_thresh = 0.6
-
-model.head._target_ = FCOSHead
-del model.head.num_anchors
-model.head.norm = "GN"
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/app.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/app.py
deleted file mode 100644
index d5a1f79c8f4b85520530af6d2b4e7b166108c6dd..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import sys, os
-
-if sys.platform == "darwin":
- os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-
-import logging
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-logging.getLogger("markdown_it").setLevel(logging.WARNING)
-logging.getLogger("urllib3").setLevel(logging.WARNING)
-logging.getLogger("matplotlib").setLevel(logging.WARNING)
-
-logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s")
-
-logger = logging.getLogger(__name__)
-
-import torch
-import argparse
-import commons
-import utils
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import cleaned_text_to_sequence, get_bert
-from text.cleaner import clean_text
-import gradio as gr
-import webbrowser
-
-
-net_g = None
-
-
-def get_text(text, language_str, hps):
- norm_text, phone, tone, word2ph = clean_text(text, language_str)
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- bert = get_bert(norm_text, word2ph, language_str)
- del word2ph
-
- assert bert.shape[-1] == len(phone)
-
- phone = torch.LongTensor(phone)
- tone = torch.LongTensor(tone)
- language = torch.LongTensor(language)
-
- return bert, phone, tone, language
-
-def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid):
- global net_g
- bert, phones, tones, lang_ids = get_text(text, "ZH", hps)
- with torch.no_grad():
- x_tst=phones.to(device).unsqueeze(0)
- tones=tones.to(device).unsqueeze(0)
- lang_ids=lang_ids.to(device).unsqueeze(0)
- bert = bert.to(device).unsqueeze(0)
- x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
- del phones
- speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
- audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio
- , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy()
- del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers
- return audio
-
-def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale):
- with torch.no_grad():
- audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker)
- return "Success", (hps.data.sampling_rate, audio)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- # parser.add_argument("-m", "--model", default="./logs/dxl/G21200.pth", help="path of your model")
-    parser.add_argument("-mn", "--model_name", default="xuanshen", help="name of your model")
- parser.add_argument("-m", "--model", default="null", help="path of your model")
- parser.add_argument("-c", "--config", default="./configs/config.json", help="path of your config file")
- parser.add_argument("--share", default=True, help="make link public")
- parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log")
-
- args = parser.parse_args()
- if args.debug:
- logger.info("Enable DEBUG-LEVEL log")
- logging.basicConfig(level=logging.DEBUG)
- hps = utils.get_hparams_from_file(args.config)
-
- device = (
- "cuda:0"
- if torch.cuda.is_available()
- else (
- "mps"
- if sys.platform == "darwin" and torch.backends.mps.is_available()
- else "cpu"
- )
- )
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).to(device)
- _ = net_g.eval()
-
- model_path = args.model
- if not os.path.exists(model_path) or model_path == "null":
- model_path = utils.latest_checkpoint_path(os.path.join("./logs/",args.model_name), "G_*.pth")
-
- _ = utils.load_checkpoint(model_path, net_g, None, skip_optimizer=True)
-
- speaker_ids = hps.data.spk2id
- speakers = list(speaker_ids.keys())
- with gr.Blocks() as app:
- with gr.Row():
- with gr.Column():
- gr.Markdown(value="""
- 炫神Bert-vits2语音在线生成\n
- 作者:东洋雪莲 https://space.bilibili.com/1060544882\n
- 声音归属:炫神 https://space.bilibili.com/299013902\n
- Bert-VITS2项目:https://github.com/Stardust-minus/Bert-VITS2\n
- 使用请严格遵守法律法规!\n
- 二创请标注项目链接、简介声明使用Bert-VITS2生成\n
- """)
- text = gr.TextArea(label="Text", placeholder="Input Text Here",
- value="你妈的我顶死你!")
- speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker')
- sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.1, label='SDP Ratio')
- noise_scale = gr.Slider(minimum=0.1, maximum=2, value=0.6, step=0.1, label='Noise Scale')
- noise_scale_w = gr.Slider(minimum=0.1, maximum=2, value=0.9, step=0.1, label='Noise Scale W')
- length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.1, label='Length Scale')
- btn = gr.Button("Generate!", variant="primary")
- with gr.Column():
- text_output = gr.Textbox(label="Message")
- audio_output = gr.Audio(label="Output Audio")
-
- btn.click(tts_fn,
- inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale],
- outputs=[text_output, audio_output])
-
- webbrowser.open("http://127.0.0.1:7860")
- app.launch(share=args.share)
diff --git a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537227KB.py b/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537227KB.py
deleted file mode 100644
index 9b127bc6427f5c60c8cf85603a3d8a093c3501c4..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537227KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
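For orientation, here is a minimal sketch (not part of the deleted file) of how the blocks defined in layers_537227KB.py fit together; the channel sizes and the import path are assumptions made for illustration.

```python
# Hypothetical wiring of the deleted UVR5 building blocks; assumes the module
# above is importable, e.g. as `layers_537227KB`. Channel sizes are invented.
import torch
from layers_537227KB import Encoder, Decoder, ASPPModule  # assumed import path

enc = Encoder(nin=2, nout=32, ksize=3, stride=2, pad=1)   # downsamples by 2, returns (h, skip)
aspp = ASPPModule(nin=32, nout=64)                        # multi-dilation context aggregation
dec = Decoder(nin=64 + 32, nout=32)                       # upsamples and fuses the skip connection

x = torch.randn(1, 2, 256, 128)                           # (batch, channels, freq, time)
h, skip = enc(x)
h = aspp(h)
y = dec(h, skip)
print(y.shape)                                            # torch.Size([1, 32, 256, 128])
```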
diff --git a/spaces/Benson/text-generation/Examples/Bombsquad Mod Apk ltima Versin.md b/spaces/Benson/text-generation/Examples/Bombsquad Mod Apk ltima Versin.md
deleted file mode 100644
index 4c16091de8a682698f6eab7ee871082f0c269ed5..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Bombsquad Mod Apk ltima Versin.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
BombSquad Mod APK latest version: Everything you need to know
-
Do you love blowing things up with your friends? Do you enjoy playing minigames involving pirates, ninjas, barbarians and crazy chefs? If you answered yes to either of these questions, you may want to check out BombSquad, a fun and explosive multiplayer game that will keep you entertained for hours. And if you want to make the game even more exciting, you can download the latest version of the BombSquad Mod APK, which gives you access to all unlocked features and unlimited resources. In this article, we will tell you everything you need to know about BombSquad and its modded version, including how to download, install and play it.
BombSquad is an action-packed game that lets you blow up your friends in minigames ranging from capture the flag to hockey. You can play with up to 8 players locally or online, using your devices as controllers. The game features advanced ragdoll physics, gratuitous explosions and hilarious characters that will make you laugh out loud. You can also customize your character with different outfits, accessories and taunts.
-
BombSquad features
-
Some of the features that make BombSquad a great game are:
-
-
More than 20 different minigames to choose from, such as King of the Hill, Bomber Hockey, Capture the Flag, Epic Slow Motion Elimination and more.
-
A variety of maps and environments to explore, such as islands, castles, stadiums and pirate ships.
-
A wide range of bombs and weapons to use, such as sticky bombs, ice bombs, boxing gloves, land mines and more.
-
A fun and easy-to-use interface that lets you create your own minigames and share them with other players.
-
A soundtrack that matches the mood and intensity of the game.
-
-
What is BombSquad Mod APK?
-
-
BombSquad Mod APK is a modified version of the original game that gives you some extra benefits and advantages. It is not an official app from the developer, but a third-party app that has been modified by unknown sources. By downloading and installing BombSquad Mod APK, you can enjoy the following features:
-
Benefits of BombSquad Mod APK
-
Some of the benefits that BombSquad Mod APK offers are:
-
-
All characters are unlocked, so you can play as any character you want.
-
All minigames are unlocked, so you can play any minigame you want.
-
All maps are unlocked, so you can explore any map you want.
-
All bombs and weapons are unlocked, so you can use any bomb or weapon you want.
-
You have unlimited tickets, which are used to buy items in the game.
-
You have unlimited health, which means you will not die easily in the game.
-
You have no ads, which means you will not be interrupted by annoying advertisements while playing the game.
-
-
How to download and install BombSquad Mod APK?
-
Steps to download and install BombSquad Mod APK
-
If you want to download and install BombSquad Mod APK on your device, you need to follow these steps:
-
-
-
Go to a trusted website that provides the download link for BombSquad Mod APK. You can search for "BombSquad Mod APK download" on Bing and choose one of the results. For example, you can use this link:
-
Click the download button and wait for the file to be downloaded to your device. The file size is about 60 MB, so make sure you have enough space and a stable internet connection.
-
Once the file is downloaded, go to your device settings and enable the option to install apps from unknown sources. This is necessary because BombSquad Mod APK is not from the Google Play Store and your device might block it otherwise.
-
-
After the installation is done, you can launch the game and enjoy BombSquad Mod APK on your device.
-
-
Tips to avoid malware and viruses
-
While BombSquad Mod APK is a fun and exciting game, you should also be careful when downloading and installing it from unknown sources. Some websites may provide fake or corrupted files that can harm your device or steal your personal information. Here are some tips to avoid malware and viruses when downloading BombSquad Mod APK:
-
-
Always use reliable antivirus software on your device and scan the file before installing it.
-
Always check the reviews and ratings of the website and the file before downloading it.
-
Always compare the file size and name with the original game and make sure they match.
-
Always back up your data and your device before installing any modded app.
-
Always uninstall the original game before installing the modded version to avoid conflicts or errors.
-
-
How do you play BombSquad Mod APK?
-
Game modes and minigames
-
BombSquad Mod APK offers you a variety of game modes and minigames to play with your friends or other players online. You can choose from:
-
-
Co-op mode: You can team up with other players and work together to complete missions and challenges.
-
Versus mode: You can compete with other players and try to defeat them in different minigames.
-
Tournament mode: You can join a tournament and try to win prizes and trophies.
-
Solo mode: You can play alone and practice your skills or test your limits.
-
-
Some of the minigames you can play in BombSquad Mod APK are:
-
-
Name
Description
-
Capture the Flag
You have to capture the enemy flag and bring it back to your base while defending your own flag.
-
-
Epic Slow Motion Elimination
You have to eliminate all the other players by throwing bombs at them while dodging theirs, all in slow motion.
-
Ninja Fight
You have to fight other ninjas using swords, shurikens and bombs while jumping across platforms.
-
Pirate Plunder
You have to collect as many coins as possible while sailing a pirate ship and avoiding cannonballs and sharks.
-
-
Controls and settings
-
BombSquad Mod APK has simple and intuitive controls that let you play the game with ease. You can use your device as a controller or connect an external controller via Bluetooth or USB. You can also customize the controls in the settings menu. The basic controls are:
-
-
Move: Use the left joystick or tilt your device to move your character.
-
Jump: Tap or press the A button to jump.
-
Pick up/Throw: Tap or press the B button to pick up or throw bombs, weapons, flags, etc.
-
Punch: Tap or press the X button to punch or use weapons.
-
Bomb: Tap or press the Y button to throw a bomb.
-
-
You can also adjust other settings in the game, such as sound, graphics, language, network, etc. You can also create your own profile and customize your character in the game.
-
Conclusion
-
Summary of the main points
-
-
Frequently asked questions
-
Here are some frequently asked questions about BombSquad Mod APK:
-
-
Q: Is BombSquad Mod APK safe to use?
-
A: BombSquad Mod APK is generally safe to use, but you should always download it from a trusted website and scan it with antivirus software before installing it. You should also back up your data and your device before installing any modded app.
-
Q: Is BombSquad Mod APK legal to use?
-
A: BombSquad Mod APK is not an official app from the developer, but a third-party app that has been modified by unknown sources. It may violate the terms and conditions of the original game and the Google Play Store. Therefore, you should use it at your own risk and discretion.
-
Q: Can I play BombSquad Mod APK with my friends?
-
A: Yes, you can play BombSquad Mod APK with your friends locally or online, using your devices as controllers. You can also invite your friends to join your game or join theirs.
-
Q: Can I update BombSquad Mod APK?
-
A: No, you cannot update BombSquad Mod APK from the Google Play Store or from the original game. You have to download and install the latest version of the modded app from a trusted website whenever there is an update.
-
Q: Can I use BombSquad Mod APK on iOS devices?
-
A: No, you cannot use BombSquad Mod APK on iOS devices, as it is only compatible with Android devices. However, you can play the original game on iOS devices by downloading it from the App Store.
-
-
-
\ No newline at end of file
diff --git a/spaces/BigChungux/Pet_Survey/README.md b/spaces/BigChungux/Pet_Survey/README.md
deleted file mode 100644
index 1fbc6b1bfd27c1ab68f8e3ec9e66f0b495319f37..0000000000000000000000000000000000000000
--- a/spaces/BigChungux/Pet_Survey/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pet Survey
-emoji: 📚
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/parallel.h b/spaces/CVPR/LIVE/parallel.h
deleted file mode 100644
index b7f9c712e471616d01921157c290a50adac768d9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/parallel.h
+++ /dev/null
@@ -1,91 +0,0 @@
-#pragma once
-
-#include "vector.h"
-
-#include <mutex>
-#include <condition_variable>
-#include <functional>
-#include <cstdint>
-#include <cassert>
-#include <algorithm>
-#include <stdexcept>
-// From https://github.com/mmp/pbrt-v3/blob/master/src/core/parallel.h
-
-class Barrier {
- public:
- Barrier(int count) : count(count) { assert(count > 0); }
- ~Barrier() { assert(count == 0); }
- void Wait();
-
- private:
- std::mutex mutex;
- std::condition_variable cv;
- int count;
-};
-
-void parallel_for_host(const std::function<void(int64_t)> &func,
- int64_t count,
- int chunkSize = 1);
-extern thread_local int ThreadIndex;
-void parallel_for_host(
-    std::function<void(Vector2i)> func, const Vector2i count);
-int num_system_cores();
-
-void parallel_init();
-void parallel_cleanup();
-
-#ifdef __CUDACC__
-template <typename T>
-__global__ void parallel_for_device_kernel(T functor, int count) {
- auto idx = threadIdx.x + blockIdx.x * blockDim.x;
- if (idx >= count) {
- return;
- }
- functor(idx);
-}
-template <typename T>
-inline void parallel_for_device(T functor,
- int count,
- int work_per_thread = 256) {
- if (count <= 0) {
- return;
- }
- auto block_size = work_per_thread;
- auto block_count = idiv_ceil(count, block_size);
-    parallel_for_device_kernel<<<block_count, block_size>>>(functor, count);
-}
-#endif
-
-template <typename T>
-inline void parallel_for(T functor,
- int count,
- bool use_gpu,
- int work_per_thread = -1) {
- if (work_per_thread == -1) {
- work_per_thread = use_gpu ? 64 : 256;
- }
- if (count <= 0) {
- return;
- }
- if (use_gpu) {
-#ifdef __CUDACC__
- auto block_size = work_per_thread;
- auto block_count = idiv_ceil(count, block_size);
-        parallel_for_device_kernel<<<block_count, block_size>>>(functor, count);
-#else
- throw std::runtime_error("diffvg not compiled with GPU");
- assert(false);
-#endif
- } else {
- auto num_threads = idiv_ceil(count, work_per_thread);
- parallel_for_host([&](int thread_index) {
- auto id_offset = work_per_thread * thread_index;
- auto work_end = std::min(id_offset + work_per_thread, count);
- for (int work_id = id_offset; work_id < work_end; work_id++) {
- auto idx = work_id;
- assert(idx < count);
- functor(idx);
- }
- }, num_threads);
- }
-}
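As a rough illustration of the CPU branch of `parallel_for` above (the index range is cut into chunks of `work_per_thread`, and each chunk is handed to a worker), here is a Python analogue; it is a sketch of the idea, not a translation of the header.

```python
# Rough Python analogue (illustration only) of the CPU branch of parallel_for above.
from concurrent.futures import ThreadPoolExecutor
import math

def parallel_for_host(functor, count, work_per_thread=256):
    if count <= 0:
        return
    num_chunks = math.ceil(count / work_per_thread)        # idiv_ceil(count, work_per_thread)

    def run_chunk(thread_index):
        id_offset = work_per_thread * thread_index
        work_end = min(id_offset + work_per_thread, count)
        for idx in range(id_offset, work_end):
            functor(idx)

    with ThreadPoolExecutor() as pool:
        list(pool.map(run_chunk, range(num_chunks)))

squares = [0] * 1000
parallel_for_host(lambda i: squares.__setitem__(i, i * i), len(squares))
print(squares[:5])
```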
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/clogf.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/clogf.h
deleted file mode 100644
index 7f3314ed2635c28ff5627235525da9c1fa8709ad..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/clogf.h
+++ /dev/null
@@ -1,198 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- * Copyright 2013 Filipe RNC Maia
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*-
- * Copyright (c) 2012 Stephen Montgomery-Smith
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-
-/* adapted from FreeBSDs msun:*/
-
-#pragma once
-
-#include <thrust/complex.h>
-#include <thrust/detail/complex/math_private.h>
-
-namespace thrust{
-namespace detail{
-namespace complex{
-
-using thrust::complex;
-
-/* round down to 8 = 24/3 bits */
-__host__ __device__ inline
-float trim(float x){
- uint32_t hx;
- get_float_word(hx, x);
- hx &= 0xffff0000;
- float ret;
- set_float_word(ret,hx);
- return ret;
-}
-
-
-__host__ __device__ inline
-complex<float> clogf(const complex<float>& z){
-
- // Adapted from FreeBSDs msun
- float x, y;
- float ax, ay;
- float x0, y0, x1, y1, x2, y2, t, hm1;
- float val[12];
- int i, sorted;
- const float e = 2.7182818284590452354f;
-
- x = z.real();
- y = z.imag();
-
- /* Handle NaNs using the general formula to mix them right. */
- if (x != x || y != y){
-    return (complex<float>(std::log(norm(z)), std::atan2(y, x)));
- }
-
- ax = std::abs(x);
- ay = std::abs(y);
- if (ax < ay) {
- t = ax;
- ax = ay;
- ay = t;
- }
-
- /*
- * To avoid unnecessary overflow, if x and y are very large, divide x
- * and y by M_E, and then add 1 to the logarithm. This depends on
- * M_E being larger than sqrt(2).
- * There is a potential loss of accuracy caused by dividing by M_E,
- * but this case should happen extremely rarely.
- */
- // For high values of ay -> hypotf(FLT_MAX,ay) = inf
- // We expect that for values at or below ay = 1e34f this should not happen
- if (ay > 1e34f){
-    return (complex<float>(std::log(hypotf(x / e, y / e)) + 1.0f, std::atan2(y, x)));
- }
- if (ax == 1.f) {
- if (ay < 1e-19f){
-      return (complex<float>((ay * 0.5f) * ay, std::atan2(y, x)));
- }
-    return (complex<float>(log1pf(ay * ay) * 0.5f, std::atan2(y, x)));
- }
-
- /*
- * Because atan2 and hypot conform to C99, this also covers all the
- * edge cases when x or y are 0 or infinite.
- */
- if (ax < 1e-6f || ay < 1e-6f || ax > 1e6f || ay > 1e6f){
-    return (complex<float>(std::log(hypotf(x, y)), std::atan2(y, x)));
- }
-
- /*
- * From this point on, we don't need to worry about underflow or
- * overflow in calculating ax*ax or ay*ay.
- */
-
- /* Some easy cases. */
-
- if (ax >= 1.0f){
-    return (complex<float>(log1pf((ax-1.f)*(ax+1.f) + ay*ay) * 0.5f, atan2(y, x)));
- }
-
- if (ax*ax + ay*ay <= 0.7f){
-    return (complex<float>(std::log(ax*ax + ay*ay) * 0.5f, std::atan2(y, x)));
- }
-
- /*
- * Take extra care so that ULP of real part is small if hypot(x,y) is
- * moderately close to 1.
- */
-
-
- x0 = trim(ax);
- ax = ax-x0;
- x1 = trim(ax);
- x2 = ax-x1;
- y0 = trim(ay);
- ay = ay-y0;
- y1 = trim(ay);
- y2 = ay-y1;
-
- val[0] = x0*x0;
- val[1] = y0*y0;
- val[2] = 2*x0*x1;
- val[3] = 2*y0*y1;
- val[4] = x1*x1;
- val[5] = y1*y1;
- val[6] = 2*x0*x2;
- val[7] = 2*y0*y2;
- val[8] = 2*x1*x2;
- val[9] = 2*y1*y2;
- val[10] = x2*x2;
- val[11] = y2*y2;
-
- /* Bubble sort. */
-
- do {
- sorted = 1;
- for (i=0;i<11;i++) {
- if (val[i] < val[i+1]) {
- sorted = 0;
- t = val[i];
- val[i] = val[i+1];
- val[i+1] = t;
- }
- }
- } while (!sorted);
-
- hm1 = -1;
- for (i=0;i<12;i++){
- hm1 += val[i];
- }
-  return (complex<float>(0.5f * log1pf(hm1), atan2(y, x)));
-}
-
-} // namespace complex
-
-} // namespace detail
-
-template <>
-__host__ __device__
-inline complex<float> log(const complex<float>& z){
- return detail::complex::clogf(z);
-}
-
-} // namespace thrust
-
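The large-magnitude branch of `clogf` above relies on the identity log(hypot(x/e, y/e)) + 1 == log(hypot(x, y)), which holds because hypot scales linearly; a quick numpy check:

```python
# Illustrative check of the rescaling used above for very large |z|.
import numpy as np

x, y = 3e30, 4e30
lhs = np.log(np.hypot(x / np.e, y / np.e)) + 1.0
rhs = np.log(np.hypot(x, y))
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```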
diff --git a/spaces/CVPR/WALT/walt/datasets/custom.py b/spaces/CVPR/WALT/walt/datasets/custom.py
deleted file mode 100644
index 572742aa2e9c57cb6de2aac17939abf4a18216a3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/walt/datasets/custom.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import os.path as osp
-import warnings
-from collections import OrderedDict
-
-import mmcv
-import numpy as np
-from mmcv.utils import print_log
-from torch.utils.data import Dataset
-
-from mmdet.core import eval_map, eval_recalls
-from .builder import DATASETS
-from .pipelines import Compose
-
-
-@DATASETS.register_module()
-class CustomDatasetLocal(Dataset):
- """Custom dataset for detection.
-
- The annotation format is shown as follows. The `ann` field is optional for
- testing.
-
- .. code-block:: none
-
- [
- {
- 'filename': 'a.jpg',
- 'width': 1280,
- 'height': 720,
- 'ann': {
- 'bboxes': (n, 4) in (x1, y1, x2, y2) order.
- 'labels': (n, ),
- 'bboxes_ignore': (k, 4), (optional field)
- 'labels_ignore': (k, 4) (optional field)
- }
- },
- ...
- ]
-
- Args:
- ann_file (str): Annotation file path.
- pipeline (list[dict]): Processing pipeline.
- classes (str | Sequence[str], optional): Specify classes to load.
- If is None, ``cls.CLASSES`` will be used. Default: None.
- data_root (str, optional): Data root for ``ann_file``,
- ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified.
- test_mode (bool, optional): If set True, annotation will not be loaded.
- filter_empty_gt (bool, optional): If set true, images without bounding
- boxes of the dataset's classes will be filtered out. This option
- only works when `test_mode=False`, i.e., we never filter images
- during tests.
- """
-
- CLASSES = None
-
- def __init__(self,
- ann_file,
- pipeline,
- classes=None,
- data_root=None,
- img_prefix='',
- seg_prefix=None,
- proposal_file=None,
- test_mode=False,
- filter_empty_gt=True):
- self.ann_file = ann_file
- self.data_root = data_root
- self.img_prefix = img_prefix
- self.seg_prefix = seg_prefix
- self.proposal_file = proposal_file
- self.test_mode = test_mode
- self.filter_empty_gt = filter_empty_gt
- self.CLASSES = self.get_classes(classes)
-
- # join paths if data_root is specified
- if self.data_root is not None:
- if not osp.isabs(self.ann_file):
- self.ann_file = osp.join(self.data_root, self.ann_file)
- if not (self.img_prefix is None or osp.isabs(self.img_prefix)):
- self.img_prefix = osp.join(self.data_root, self.img_prefix)
- if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)):
- self.seg_prefix = osp.join(self.data_root, self.seg_prefix)
- if not (self.proposal_file is None
- or osp.isabs(self.proposal_file)):
- self.proposal_file = osp.join(self.data_root,
- self.proposal_file)
- # load annotations (and proposals)
- self.data_infos = self.load_annotations(self.ann_file)
-
- if self.proposal_file is not None:
- self.proposals = self.load_proposals(self.proposal_file)
- else:
- self.proposals = None
-
- # filter images too small and containing no annotations
- if not test_mode:
- valid_inds = self._filter_imgs()
- self.data_infos = [self.data_infos[i] for i in valid_inds]
- if self.proposals is not None:
- self.proposals = [self.proposals[i] for i in valid_inds]
- # set group flag for the sampler
- self._set_group_flag()
-
- # processing pipeline
- self.pipeline = Compose(pipeline)
-
- def __len__(self):
- """Total number of samples of data."""
- return len(self.data_infos)
-
- def load_annotations(self, ann_file):
- """Load annotation from annotation file."""
- return mmcv.load(ann_file)
-
- def load_proposals(self, proposal_file):
- """Load proposal from proposal file."""
- return mmcv.load(proposal_file)
-
- def get_ann_info(self, idx):
- """Get annotation by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Annotation info of specified index.
- """
-
- return self.data_infos[idx]['ann']
-
- def get_cat_ids(self, idx):
- """Get category ids by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- list[int]: All categories in the image of specified index.
- """
-
- return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist()
-
- def pre_pipeline(self, results):
- """Prepare results dict for pipeline."""
- results['img_prefix'] = self.img_prefix
- results['seg_prefix'] = self.seg_prefix
- results['proposal_file'] = self.proposal_file
- results['bbox_fields'] = []
- results['bbox3d_fields'] = []
- results['mask_fields'] = []
- results['seg_fields'] = []
-
- def _filter_imgs(self, min_size=32):
- """Filter images too small."""
- if self.filter_empty_gt:
- warnings.warn(
- 'CustomDataset does not support filtering empty gt images.')
- valid_inds = []
- for i, img_info in enumerate(self.data_infos):
- if min(img_info['width'], img_info['height']) >= min_size:
- valid_inds.append(i)
- return valid_inds
-
- def _set_group_flag(self):
- """Set flag according to image aspect ratio.
-
- Images with aspect ratio greater than 1 will be set as group 1,
- otherwise group 0.
- """
- self.flag = np.zeros(len(self), dtype=np.uint8)
- for i in range(len(self)):
- img_info = self.data_infos[i]
- if img_info['width'] / img_info['height'] > 1:
- self.flag[i] = 1
-
- def _rand_another(self, idx):
- """Get another random index from the same group as the given index."""
- pool = np.where(self.flag == self.flag[idx])[0]
- return np.random.choice(pool)
-
- def __getitem__(self, idx):
- """Get training/test data after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Training/test data (with annotation if `test_mode` is set \
- True).
- """
-
- if self.test_mode:
- return self.prepare_test_img(idx)
- while True:
- data = self.prepare_train_img(idx)
- if data is None:
- idx = self._rand_another(idx)
- continue
- return data
-
- def prepare_train_img(self, idx):
- """Get training data and annotations after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Training data and annotation after pipeline with new keys \
- introduced by pipeline.
- """
-
- img_info = self.data_infos[idx]
- ann_info = self.get_ann_info(idx)
- results = dict(img_info=img_info, ann_info=ann_info)
- if self.proposals is not None:
- results['proposals'] = self.proposals[idx]
- self.pre_pipeline(results)
- return self.pipeline(results)
-
- def prepare_test_img(self, idx):
- """Get testing data after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Testing data after pipeline with new keys introduced by \
- pipeline.
- """
-
- img_info = self.data_infos[idx]
- results = dict(img_info=img_info)
- if self.proposals is not None:
- results['proposals'] = self.proposals[idx]
- self.pre_pipeline(results)
- return self.pipeline(results)
-
- @classmethod
- def get_classes(cls, classes=None):
- """Get class names of current dataset.
-
- Args:
- classes (Sequence[str] | str | None): If classes is None, use
- default CLASSES defined by builtin dataset. If classes is a
- string, take it as a file name. The file contains the name of
- classes where each line contains one class name. If classes is
- a tuple or list, override the CLASSES defined by the dataset.
-
- Returns:
- tuple[str] or list[str]: Names of categories of the dataset.
- """
- if classes is None:
- return cls.CLASSES
-
- if isinstance(classes, str):
- # take it as a file path
- class_names = mmcv.list_from_file(classes)
- elif isinstance(classes, (tuple, list)):
- class_names = classes
- else:
- raise ValueError(f'Unsupported type {type(classes)} of classes.')
-
- return class_names
-
- def format_results(self, results, **kwargs):
- """Place holder to format result to dataset specific output."""
-
- def evaluate(self,
- results,
- metric='mAP',
- logger=None,
- proposal_nums=(100, 300, 1000),
- iou_thr=0.5,
- scale_ranges=None):
- """Evaluate the dataset.
-
- Args:
- results (list): Testing results of the dataset.
- metric (str | list[str]): Metrics to be evaluated.
- logger (logging.Logger | None | str): Logger used for printing
- related information during evaluation. Default: None.
- proposal_nums (Sequence[int]): Proposal number used for evaluating
- recalls, such as recall@100, recall@1000.
- Default: (100, 300, 1000).
- iou_thr (float | list[float]): IoU threshold. Default: 0.5.
- scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP.
- Default: None.
- """
-
- if not isinstance(metric, str):
- assert len(metric) == 1
- metric = metric[0]
- allowed_metrics = ['mAP', 'recall']
- if metric not in allowed_metrics:
- raise KeyError(f'metric {metric} is not supported')
- annotations = [self.get_ann_info(i) for i in range(len(self))]
- eval_results = OrderedDict()
- iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr
- if metric == 'mAP':
- assert isinstance(iou_thrs, list)
- mean_aps = []
- for iou_thr in iou_thrs:
- print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}')
- mean_ap, _ = eval_map(
- results,
- annotations,
- scale_ranges=scale_ranges,
- iou_thr=iou_thr,
- dataset=self.CLASSES,
- logger=logger)
- mean_aps.append(mean_ap)
- eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3)
- eval_results['mAP'] = sum(mean_aps) / len(mean_aps)
- elif metric == 'recall':
- gt_bboxes = [ann['bboxes'] for ann in annotations]
- recalls = eval_recalls(
- gt_bboxes, results, proposal_nums, iou_thr, logger=logger)
- for i, num in enumerate(proposal_nums):
- for j, iou in enumerate(iou_thrs):
- eval_results[f'recall@{num}@{iou}'] = recalls[i, j]
- if recalls.shape[1] > 1:
- ar = recalls.mean(axis=1)
- for i, num in enumerate(proposal_nums):
- eval_results[f'AR@{num}'] = ar[i]
- return eval_results
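For reference, a single record in the annotation format described by the `CustomDatasetLocal` docstring above might look like this (all values are invented for illustration):

```python
# Illustrative single annotation record matching the docstring format above.
import numpy as np

record = {
    "filename": "a.jpg",
    "width": 1280,
    "height": 720,
    "ann": {
        "bboxes": np.array([[10.0, 20.0, 200.0, 240.0]], dtype=np.float32),  # (n, 4), x1, y1, x2, y2
        "labels": np.array([0], dtype=np.int64),                             # (n, )
        "bboxes_ignore": np.zeros((0, 4), dtype=np.float32),                 # optional
        "labels_ignore": np.zeros((0,), dtype=np.int64),                     # optional
    },
}
# A list of such records, saved with mmcv.dump(...), is what load_annotations()
# above would read back via mmcv.load(ann_file).
```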
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/samplers/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/data/samplers/__init__.py
deleted file mode 100644
index 4bacd895756cedbc9b37fe24af6dbcd8a054246b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/samplers/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .distributed_sampler import InferenceSampler, RepeatFactorTrainingSampler, TrainingSampler
-from .grouped_batch_sampler import GroupedBatchSampler
-
-__all__ = [
- "GroupedBatchSampler",
- "TrainingSampler",
- "InferenceSampler",
- "RepeatFactorTrainingSampler",
-]
diff --git a/spaces/CVPR/regionclip-demo/detectron2/evaluation/sem_seg_evaluation.py b/spaces/CVPR/regionclip-demo/detectron2/evaluation/sem_seg_evaluation.py
deleted file mode 100644
index 7a19db71562ef47569dc7f77ec616af85447f0ec..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/evaluation/sem_seg_evaluation.py
+++ /dev/null
@@ -1,184 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import json
-import logging
-import numpy as np
-import os
-from collections import OrderedDict
-import PIL.Image as Image
-import pycocotools.mask as mask_util
-import torch
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.utils.comm import all_gather, is_main_process, synchronize
-from detectron2.utils.file_io import PathManager
-
-from .evaluator import DatasetEvaluator
-
-
-class SemSegEvaluator(DatasetEvaluator):
- """
- Evaluate semantic segmentation metrics.
- """
-
- def __init__(
- self,
- dataset_name,
- distributed=True,
- output_dir=None,
- *,
- num_classes=None,
- ignore_label=None,
- ):
- """
- Args:
- dataset_name (str): name of the dataset to be evaluated.
- distributed (bool): if True, will collect results from all ranks for evaluation.
- Otherwise, will evaluate the results in the current process.
- output_dir (str): an output directory to dump results.
- num_classes, ignore_label: deprecated argument
- """
- self._logger = logging.getLogger(__name__)
- if num_classes is not None:
- self._logger.warn(
- "SemSegEvaluator(num_classes) is deprecated! It should be obtained from metadata."
- )
- if ignore_label is not None:
- self._logger.warn(
- "SemSegEvaluator(ignore_label) is deprecated! It should be obtained from metadata."
- )
- self._dataset_name = dataset_name
- self._distributed = distributed
- self._output_dir = output_dir
-
- self._cpu_device = torch.device("cpu")
-
- self.input_file_to_gt_file = {
- dataset_record["file_name"]: dataset_record["sem_seg_file_name"]
- for dataset_record in DatasetCatalog.get(dataset_name)
- }
-
- meta = MetadataCatalog.get(dataset_name)
- # Dict that maps contiguous training ids to COCO category ids
- try:
- c2d = meta.stuff_dataset_id_to_contiguous_id
- self._contiguous_id_to_dataset_id = {v: k for k, v in c2d.items()}
- except AttributeError:
- self._contiguous_id_to_dataset_id = None
- self._class_names = meta.stuff_classes
- self._num_classes = len(meta.stuff_classes)
- if num_classes is not None:
- assert self._num_classes == num_classes, f"{self._num_classes} != {num_classes}"
- self._ignore_label = ignore_label if ignore_label is not None else meta.ignore_label
-
- def reset(self):
- self._conf_matrix = np.zeros((self._num_classes + 1, self._num_classes + 1), dtype=np.int64)
- self._predictions = []
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a model.
- It is a list of dicts. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name".
- outputs: the outputs of a model. It is either list of semantic segmentation predictions
- (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic
- segmentation prediction in the same format.
- """
- for input, output in zip(inputs, outputs):
- output = output["sem_seg"].argmax(dim=0).to(self._cpu_device)
- pred = np.array(output, dtype=np.int)
- with PathManager.open(self.input_file_to_gt_file[input["file_name"]], "rb") as f:
- gt = np.array(Image.open(f), dtype=np.int)
-
- gt[gt == self._ignore_label] = self._num_classes
-
- self._conf_matrix += np.bincount(
- (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1),
- minlength=self._conf_matrix.size,
- ).reshape(self._conf_matrix.shape)
-
- self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"]))
-
- def evaluate(self):
- """
- Evaluates standard semantic segmentation metrics (http://cocodataset.org/#stuff-eval):
-
- * Mean intersection-over-union averaged across classes (mIoU)
- * Frequency Weighted IoU (fwIoU)
- * Mean pixel accuracy averaged across classes (mACC)
- * Pixel Accuracy (pACC)
- """
- if self._distributed:
- synchronize()
- conf_matrix_list = all_gather(self._conf_matrix)
- self._predictions = all_gather(self._predictions)
- self._predictions = list(itertools.chain(*self._predictions))
- if not is_main_process():
- return
-
- self._conf_matrix = np.zeros_like(self._conf_matrix)
- for conf_matrix in conf_matrix_list:
- self._conf_matrix += conf_matrix
-
- if self._output_dir:
- PathManager.mkdirs(self._output_dir)
- file_path = os.path.join(self._output_dir, "sem_seg_predictions.json")
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(self._predictions))
-
- acc = np.full(self._num_classes, np.nan, dtype=np.float)
- iou = np.full(self._num_classes, np.nan, dtype=np.float)
- tp = self._conf_matrix.diagonal()[:-1].astype(np.float)
- pos_gt = np.sum(self._conf_matrix[:-1, :-1], axis=0).astype(np.float)
- class_weights = pos_gt / np.sum(pos_gt)
- pos_pred = np.sum(self._conf_matrix[:-1, :-1], axis=1).astype(np.float)
- acc_valid = pos_gt > 0
- acc[acc_valid] = tp[acc_valid] / pos_gt[acc_valid]
- iou_valid = (pos_gt + pos_pred) > 0
- union = pos_gt + pos_pred - tp
- iou[acc_valid] = tp[acc_valid] / union[acc_valid]
- macc = np.sum(acc[acc_valid]) / np.sum(acc_valid)
- miou = np.sum(iou[acc_valid]) / np.sum(iou_valid)
- fiou = np.sum(iou[acc_valid] * class_weights[acc_valid])
- pacc = np.sum(tp) / np.sum(pos_gt)
-
- res = {}
- res["mIoU"] = 100 * miou
- res["fwIoU"] = 100 * fiou
- for i, name in enumerate(self._class_names):
- res["IoU-{}".format(name)] = 100 * iou[i]
- res["mACC"] = 100 * macc
- res["pACC"] = 100 * pacc
- for i, name in enumerate(self._class_names):
- res["ACC-{}".format(name)] = 100 * acc[i]
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "sem_seg_evaluation.pth")
- with PathManager.open(file_path, "wb") as f:
- torch.save(res, f)
- results = OrderedDict({"sem_seg": res})
- self._logger.info(results)
- return results
-
- def encode_json_sem_seg(self, sem_seg, input_file_name):
- """
- Convert semantic segmentation to COCO stuff format with segments encoded as RLEs.
- See http://cocodataset.org/#format-results
- """
- json_list = []
- for label in np.unique(sem_seg):
- if self._contiguous_id_to_dataset_id is not None:
- assert (
- label in self._contiguous_id_to_dataset_id
- ), "Label {} is not in the metadata info for {}".format(label, self._dataset_name)
- dataset_id = self._contiguous_id_to_dataset_id[label]
- else:
- dataset_id = int(label)
- mask = (sem_seg == label).astype(np.uint8)
- mask_rle = mask_util.encode(np.array(mask[:, :, None], order="F"))[0]
- mask_rle["counts"] = mask_rle["counts"].decode("utf-8")
- json_list.append(
- {"file_name": input_file_name, "category_id": dataset_id, "segmentation": mask_rle}
- )
- return json_list
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/proposal_utils.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/proposal_utils.py
deleted file mode 100644
index 48485a420a8afafc4097bef982c8d23b91b95269..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/proposal_utils.py
+++ /dev/null
@@ -1,200 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import math
-from typing import List, Tuple, Union
-import torch
-
-from detectron2.layers import batched_nms, cat
-from detectron2.structures import Boxes, Instances
-from detectron2.utils.env import TORCH_VERSION
-
-logger = logging.getLogger(__name__)
-
-
-def _is_tracing():
- if torch.jit.is_scripting():
- # https://github.com/pytorch/pytorch/issues/47379
- return False
- else:
- return TORCH_VERSION >= (1, 7) and torch.jit.is_tracing()
-
-
-def find_top_rpn_proposals(
- proposals: List[torch.Tensor],
- pred_objectness_logits: List[torch.Tensor],
- image_sizes: List[Tuple[int, int]],
- nms_thresh: float,
- pre_nms_topk: int,
- post_nms_topk: int,
- min_box_size: float,
- training: bool,
-):
- """
- For each feature map, select the `pre_nms_topk` highest scoring proposals,
- apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk`
- highest scoring proposals among all the feature maps for each image.
-
- Args:
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 4).
- All proposal predictions on the feature maps.
- pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A).
- image_sizes (list[tuple]): sizes (h, w) for each image
- nms_thresh (float): IoU threshold to use for NMS
- pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS.
- When RPN is run on multiple feature maps (as in FPN) this number is per
- feature map.
- post_nms_topk (int): number of top k scoring proposals to keep after applying NMS.
- When RPN is run on multiple feature maps (as in FPN) this number is total,
- over all feature maps.
- min_box_size (float): minimum proposal box side length in pixels (absolute units
- wrt input images).
- training (bool): True if proposals are to be used in training, otherwise False.
- This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..."
- comment.
-
- Returns:
- list[Instances]: list of N Instances. The i-th Instances
- stores post_nms_topk object proposals for image i, sorted by their
- objectness score in descending order.
- """
- num_images = len(image_sizes)
- device = proposals[0].device
-
- # 1. Select top-k anchor for every level and every image
- topk_scores = [] # #lvl Tensor, each of shape N x topk
- topk_proposals = []
- level_ids = [] # #lvl Tensor, each of shape (topk,)
- batch_idx = torch.arange(num_images, device=device)
- for level_id, (proposals_i, logits_i) in enumerate(zip(proposals, pred_objectness_logits)):
- Hi_Wi_A = logits_i.shape[1]
- if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing
- num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk)
- else:
- num_proposals_i = min(Hi_Wi_A, pre_nms_topk)
-
- # sort is faster than topk: https://github.com/pytorch/pytorch/issues/22812
- # topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1)
- logits_i, idx = logits_i.sort(descending=True, dim=1)
- topk_scores_i = logits_i.narrow(1, 0, num_proposals_i)
- topk_idx = idx.narrow(1, 0, num_proposals_i)
-
- # each is N x topk
- topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 4
-
- topk_proposals.append(topk_proposals_i)
- topk_scores.append(topk_scores_i)
- level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device))
-
- # 2. Concat all levels together
- topk_scores = cat(topk_scores, dim=1)
- topk_proposals = cat(topk_proposals, dim=1)
- level_ids = cat(level_ids, dim=0)
-
- # 3. For each image, run a per-level NMS, and choose topk results.
- results: List[Instances] = []
- for n, image_size in enumerate(image_sizes):
- boxes = Boxes(topk_proposals[n])
- scores_per_img = topk_scores[n]
- lvl = level_ids
-
- valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img)
- if not valid_mask.all():
- if training:
- raise FloatingPointError(
- "Predicted boxes or scores contain Inf/NaN. Training has diverged."
- )
- boxes = boxes[valid_mask]
- scores_per_img = scores_per_img[valid_mask]
- lvl = lvl[valid_mask]
- boxes.clip(image_size)
-
- # filter empty boxes
- keep = boxes.nonempty(threshold=min_box_size)
- if _is_tracing() or keep.sum().item() != len(boxes):
- boxes, scores_per_img, lvl = boxes[keep], scores_per_img[keep], lvl[keep]
-
- keep = batched_nms(boxes.tensor, scores_per_img, lvl, nms_thresh)
- # In Detectron1, there was different behavior during training vs. testing.
- # (https://github.com/facebookresearch/Detectron/issues/459)
- # During training, topk is over the proposals from *all* images in the training batch.
- # During testing, it is over the proposals for each image separately.
- # As a result, the training behavior becomes batch-dependent,
- # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size.
- # This bug is addressed in Detectron2 to make the behavior independent of batch size.
- keep = keep[:post_nms_topk] # keep is already sorted
-
- res = Instances(image_size)
- res.proposal_boxes = boxes[keep]
- res.objectness_logits = scores_per_img[keep]
- results.append(res)
- return results
-
-
-def add_ground_truth_to_proposals(
- gt: Union[List[Instances], List[Boxes]], proposals: List[Instances]
-) -> List[Instances]:
- """
- Call `add_ground_truth_to_proposals_single_image` for all images.
-
- Args:
- gt(Union[List[Instances], List[Boxes]): list of N elements. Element i is a Instances
- representing the ground-truth for image i.
- proposals (list[Instances]): list of N elements. Element i is a Instances
- representing the proposals for image i.
-
- Returns:
- list[Instances]: list of N Instances. Each is the proposals for the image,
- with field "proposal_boxes" and "objectness_logits".
- """
- assert gt is not None
-
- if len(proposals) != len(gt):
- raise ValueError("proposals and gt should have the same length as the number of images!")
- if len(proposals) == 0:
- return proposals
-
- return [
- add_ground_truth_to_proposals_single_image(gt_i, proposals_i)
- for gt_i, proposals_i in zip(gt, proposals)
- ]
-
-
-def add_ground_truth_to_proposals_single_image(
- gt: Union[Instances, Boxes], proposals: Instances
-) -> Instances:
- """
- Augment `proposals` with `gt`.
-
- Args:
- Same as `add_ground_truth_to_proposals`, but with gt and proposals
- per image.
-
- Returns:
- Same as `add_ground_truth_to_proposals`, but for only one image.
- """
- if isinstance(gt, Boxes):
- # convert Boxes to Instances
- gt = Instances(proposals.image_size, gt_boxes=gt)
-
- gt_boxes = gt.gt_boxes
- device = proposals.objectness_logits.device
- # Assign all ground-truth boxes an objectness logit corresponding to
- # P(object) = sigmoid(logit) =~ 1.
- gt_logit_value = math.log((1.0 - 1e-10) / (1 - (1.0 - 1e-10)))
- gt_logits = gt_logit_value * torch.ones(len(gt_boxes), device=device)
-
- # Concatenating gt_boxes with proposals requires them to have the same fields
- gt_proposal = Instances(proposals.image_size, **gt.get_fields())
- gt_proposal.proposal_boxes = gt_boxes
- gt_proposal.objectness_logits = gt_logits
-
- for key in proposals.get_fields().keys():
- assert gt_proposal.has(
- key
- ), "The attribute '{}' in `proposals` does not exist in `gt`".format(key)
-
- # NOTE: Instances.cat only use fields from the first item. Extra fields in latter items
- # will be thrown away.
- new_proposals = Instances.cat([proposals, gt_proposal])
-
- return new_proposals
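A quick check of the objectness logit assigned to ground-truth boxes in `add_ground_truth_to_proposals_single_image` above:

```python
# logit = log((1 - 1e-10) / (1 - (1 - 1e-10))) makes sigmoid(logit) ~= 1.
import math

gt_logit_value = math.log((1.0 - 1e-10) / (1 - (1.0 - 1e-10)))
print(gt_logit_value)                           # ~23.03
print(1.0 / (1.0 + math.exp(-gt_logit_value)))  # ~1.0
```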
diff --git a/spaces/ChenyangSi/FreeU/free_lunch_utils.py b/spaces/ChenyangSi/FreeU/free_lunch_utils.py
deleted file mode 100644
index 7e5ef4a885bd333b8c9dcfe3e8256d253bed6467..0000000000000000000000000000000000000000
--- a/spaces/ChenyangSi/FreeU/free_lunch_utils.py
+++ /dev/null
@@ -1,340 +0,0 @@
-import torch
-import torch.fft as fft
-from diffusers.models.unet_2d_condition import logger
-from diffusers.utils import is_torch_version
-from typing import Any, Dict, List, Optional, Tuple, Union
-
-
-def isinstance_str(x: object, cls_name: str):
- """
- Checks whether x has any class *named* cls_name in its ancestry.
- Doesn't require access to the class's implementation.
-
- Useful for patching!
- """
-
- for _cls in x.__class__.__mro__:
- if _cls.__name__ == cls_name:
- return True
-
- return False
-
-
-def Fourier_filter(x, threshold, scale):
- dtype = x.dtype
- x = x.type(torch.float32)
- # FFT
- x_freq = fft.fftn(x, dim=(-2, -1))
- x_freq = fft.fftshift(x_freq, dim=(-2, -1))
-
- B, C, H, W = x_freq.shape
- mask = torch.ones((B, C, H, W)).cuda()
-
- crow, ccol = H // 2, W //2
- mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
- x_freq = x_freq * mask
-
- # IFFT
- x_freq = fft.ifftshift(x_freq, dim=(-2, -1))
- x_filtered = fft.ifftn(x_freq, dim=(-2, -1)).real
-
- x_filtered = x_filtered.type(dtype)
- return x_filtered
-
-
-def register_upblock2d(model):
- def up_forward(self):
- def forward(hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- #print(f"in upblock2d, hidden states shape: {hidden_states.shape}")
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs)
-
- return custom_forward
-
- if is_torch_version(">=", "1.11.0"):
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb, use_reentrant=False
- )
- else:
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- else:
- hidden_states = resnet(hidden_states, temb)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
- return forward
-
- for i, upsample_block in enumerate(model.unet.up_blocks):
- if isinstance_str(upsample_block, "UpBlock2D"):
- upsample_block.forward = up_forward(upsample_block)
-
-
-def register_free_upblock2d(model, b1=1.2, b2=1.4, s1=0.9, s2=0.2):
- def up_forward(self):
- def forward(hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- #print(f"in free upblock2d, hidden states shape: {hidden_states.shape}")
-
- # # --------------- FreeU code -----------------------
- # # Only operate on the first two stages
- # if hidden_states.shape[1] == 1280:
- # hidden_states[:,:640] = hidden_states[:,:640] * self.b1
- # res_hidden_states = Fourier_filter(res_hidden_states, threshold=1, scale=self.s1)
- # if hidden_states.shape[1] == 640:
- # hidden_states[:,:320] = hidden_states[:,:320] * self.b2
- # res_hidden_states = Fourier_filter(res_hidden_states, threshold=1, scale=self.s2)
- # # ---------------------------------------------------------
-
- # --------------- FreeU code -----------------------
- # Only operate on the first two stages
- if hidden_states.shape[1] == 1280:
- hidden_mean = hidden_states.mean(1).unsqueeze(1)
- B = hidden_mean.shape[0]
- hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
- hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
-
- hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
-
- hidden_states[:,:640] = hidden_states[:,:640] * ((self.b1 - 1 ) * hidden_mean + 1)
- res_hidden_states = Fourier_filter(res_hidden_states, threshold=1, scale=self.s1)
- if hidden_states.shape[1] == 640:
- hidden_mean = hidden_states.mean(1).unsqueeze(1)
- B = hidden_mean.shape[0]
- hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
- hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
- hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
-
- hidden_states[:,:320] = hidden_states[:,:320] * ((self.b2 - 1 ) * hidden_mean + 1)
- res_hidden_states = Fourier_filter(res_hidden_states, threshold=1, scale=self.s2)
- # ---------------------------------------------------------
-
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs)
-
- return custom_forward
-
- if is_torch_version(">=", "1.11.0"):
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb, use_reentrant=False
- )
- else:
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- else:
- hidden_states = resnet(hidden_states, temb)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
- return forward
-
- for i, upsample_block in enumerate(model.unet.up_blocks):
- if isinstance_str(upsample_block, "UpBlock2D"):
- upsample_block.forward = up_forward(upsample_block)
- setattr(upsample_block, 'b1', b1)
- setattr(upsample_block, 'b2', b2)
- setattr(upsample_block, 's1', s1)
- setattr(upsample_block, 's2', s2)
-
-
-def register_crossattn_upblock2d(model):
- def up_forward(self):
- def forward(
- hidden_states: torch.FloatTensor,
- res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- upsample_size: Optional[int] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ):
- for resnet, attn in zip(self.resnets, self.attentions):
- # pop res hidden states
- #print(f"in crossatten upblock2d, hidden states shape: {hidden_states.shape}")
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module, return_dict=None):
- def custom_forward(*inputs):
- if return_dict is not None:
- return module(*inputs, return_dict=return_dict)
- else:
- return module(*inputs)
-
- return custom_forward
-
- ckpt_kwargs: Dict[str, Any] = {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet),
- hidden_states,
- temb,
- **ckpt_kwargs,
- )
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(attn, return_dict=False),
- hidden_states,
- encoder_hidden_states,
- None, # timestep
- None, # class_labels
- cross_attention_kwargs,
- attention_mask,
- encoder_attention_mask,
- **ckpt_kwargs,
- )[0]
- else:
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- return_dict=False,
- )[0]
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
- return forward
-
- for i, upsample_block in enumerate(model.unet.up_blocks):
- if isinstance_str(upsample_block, "CrossAttnUpBlock2D"):
- upsample_block.forward = up_forward(upsample_block)
-
-
-def register_free_crossattn_upblock2d(model, b1=1.2, b2=1.4, s1=0.9, s2=0.2):
- def up_forward(self):
- def forward(
- hidden_states: torch.FloatTensor,
- res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- upsample_size: Optional[int] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ):
- for resnet, attn in zip(self.resnets, self.attentions):
- # pop res hidden states
- #print(f"in free crossatten upblock2d, hidden states shape: {hidden_states.shape}")
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
-
- # --------------- FreeU code -----------------------
- # Only operate on the first two stages
- if hidden_states.shape[1] == 1280:
- hidden_mean = hidden_states.mean(1).unsqueeze(1)
- B = hidden_mean.shape[0]
- hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
- hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
-
- hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
-
- hidden_states[:,:640] = hidden_states[:,:640] * ((self.b1 - 1 ) * hidden_mean + 1)
- res_hidden_states = Fourier_filter(res_hidden_states, threshold=1, scale=self.s1)
- if hidden_states.shape[1] == 640:
- hidden_mean = hidden_states.mean(1).unsqueeze(1)
- B = hidden_mean.shape[0]
- hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
- hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
- hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
-
- hidden_states[:,:320] = hidden_states[:,:320] * ((self.b2 - 1 ) * hidden_mean + 1)
- res_hidden_states = Fourier_filter(res_hidden_states, threshold=1, scale=self.s2)
- # ---------------------------------------------------------
-
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module, return_dict=None):
- def custom_forward(*inputs):
- if return_dict is not None:
- return module(*inputs, return_dict=return_dict)
- else:
- return module(*inputs)
-
- return custom_forward
-
- ckpt_kwargs: Dict[str, Any] = {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet),
- hidden_states,
- temb,
- **ckpt_kwargs,
- )
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(attn, return_dict=False),
- hidden_states,
- encoder_hidden_states,
- None, # timestep
- None, # class_labels
- cross_attention_kwargs,
- attention_mask,
- encoder_attention_mask,
- **ckpt_kwargs,
- )[0]
- else:
- hidden_states = resnet(hidden_states, temb)
- # hidden_states = attn(
- # hidden_states,
- # encoder_hidden_states=encoder_hidden_states,
- # cross_attention_kwargs=cross_attention_kwargs,
- # encoder_attention_mask=encoder_attention_mask,
- # return_dict=False,
- # )[0]
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- )[0]
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
- return forward
-
- for i, upsample_block in enumerate(model.unet.up_blocks):
- if isinstance_str(upsample_block, "CrossAttnUpBlock2D"):
- upsample_block.forward = up_forward(upsample_block)
- setattr(upsample_block, 'b1', b1)
- setattr(upsample_block, 'b2', b2)
- setattr(upsample_block, 's1', s1)
- setattr(upsample_block, 's2', s2)
\ No newline at end of file
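For context, the registration helpers above were typically applied to a diffusers pipeline roughly like this; the model id and the b/s values below are examples only, not part of the deleted file.

```python
# Hypothetical usage sketch: patch a diffusers Stable Diffusion pipeline with the
# FreeU helpers defined above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # Fourier_filter above builds its mask with .cuda(), so a GPU is assumed

register_free_upblock2d(pipe, b1=1.2, b2=1.4, s1=0.9, s2=0.2)
register_free_crossattn_upblock2d(pipe, b1=1.2, b2=1.4, s1=0.9, s2=0.2)

image = pipe("an astronaut riding a horse").images[0]
image.save("freeu_sample.png")
```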
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/tz/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/tz/__init__.py
deleted file mode 100644
index af1352c47292f4eebc5cae8da45641b5544558e3..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/tz/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# -*- coding: utf-8 -*-
-from .tz import *
-from .tz import __doc__
-
-__all__ = ["tzutc", "tzoffset", "tzlocal", "tzfile", "tzrange",
- "tzstr", "tzical", "tzwin", "tzwinlocal", "gettz",
- "enfold", "datetime_ambiguous", "datetime_exists",
- "resolve_imaginary", "UTC", "DeprecatedTzFormatWarning"]
-
-
-class DeprecatedTzFormatWarning(Warning):
- """Warning raised when time zones are parsed from deprecated formats."""
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/parquet.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/parquet.py
deleted file mode 100644
index af55f8cf48e80ed81ba9abc3bff51915a5daf84c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/parquet.py
+++ /dev/null
@@ -1,551 +0,0 @@
-import io
-import json
-import warnings
-
-from .core import url_to_fs
-from .utils import merge_offset_ranges
-
-# Parquet-Specific Utilities for fsspec
-#
-# Most of the functions defined in this module are NOT
-# intended for public consumption. The only exception
-# to this is `open_parquet_file`, which should be used
-# place of `fs.open()` to open parquet-formatted files
-# on remote file systems.
-
-
-def open_parquet_file(
- path,
- mode="rb",
- fs=None,
- metadata=None,
- columns=None,
- row_groups=None,
- storage_options=None,
- strict=False,
- engine="auto",
- max_gap=64_000,
- max_block=256_000_000,
- footer_sample_size=1_000_000,
- **kwargs,
-):
- """
- Return a file-like object for a single Parquet file.
-
- The specified parquet `engine` will be used to parse the
- footer metadata, and determine the required byte ranges
- from the file. The target path will then be opened with
- the "parts" (`KnownPartsOfAFile`) caching strategy.
-
- Note that this method is intended for usage with remote
- file systems, and is unlikely to improve parquet-read
- performance on local file systems.
-
- Parameters
- ----------
- path: str
- Target file path.
- mode: str, optional
- Mode option to be passed through to `fs.open`. Default is "rb".
- metadata: Any, optional
- Parquet metadata object. Object type must be supported
- by the backend parquet engine. For now, only the "fastparquet"
- engine supports an explicit `ParquetFile` metadata object.
- If a metadata object is supplied, the remote footer metadata
- will not need to be transferred into local memory.
- fs: AbstractFileSystem, optional
- Filesystem object to use for opening the file. If nothing is
- specified, an `AbstractFileSystem` object will be inferred.
- engine : str, default "auto"
- Parquet engine to use for metadata parsing. Allowed options
- include "fastparquet", "pyarrow", and "auto". The specified
- engine must be installed in the current environment. If
- "auto" is specified, and both engines are installed,
- "fastparquet" will take precedence over "pyarrow".
- columns: list, optional
- List of all column names that may be read from the file.
- row_groups : list, optional
- List of all row-groups that may be read from the file. This
- may be a list of row-group indices (integers), or it may be
- a list of `RowGroup` metadata objects (if the "fastparquet"
- engine is used).
- storage_options : dict, optional
- Used to generate an `AbstractFileSystem` object if `fs` was
- not specified.
- strict : bool, optional
- Whether the resulting `KnownPartsOfAFile` cache should
- fetch reads that go beyond a known byte-range boundary.
- If `False` (the default), any read that ends outside a
- known part will be zero padded. Note that using
- `strict=True` may be useful for debugging.
- max_gap : int, optional
- Neighboring byte ranges will only be merged when their
- inter-range gap is <= `max_gap`. Default is 64KB.
- max_block : int, optional
- Neighboring byte ranges will only be merged when the size of
- the aggregated range is <= `max_block`. Default is 256MB.
- footer_sample_size : int, optional
- Number of bytes to read from the end of the path to look
- for the footer metadata. If the sampled bytes do not contain
- the footer, a second read request will be required, and
- performance will suffer. Default is 1MB.
- **kwargs :
- Optional key-word arguments to pass to `fs.open`
- """
-
- # Make sure we have an `AbstractFileSystem` object
- # to work with
- if fs is None:
- fs = url_to_fs(path, **(storage_options or {}))[0]
-
- # For now, `columns == []` not supported. Just use
- # default `open` command with `path` input
- if columns is not None and len(columns) == 0:
- return fs.open(path, mode=mode)
-
- # Set the engine
- engine = _set_engine(engine)
-
- # Fetch the known byte ranges needed to read
- # `columns` and/or `row_groups`
- data = _get_parquet_byte_ranges(
- [path],
- fs,
- metadata=metadata,
- columns=columns,
- row_groups=row_groups,
- engine=engine,
- max_gap=max_gap,
- max_block=max_block,
- footer_sample_size=footer_sample_size,
- )
-
- # Extract file name from `data`
- fn = next(iter(data)) if data else path
-
- # Call self.open with "parts" caching
- options = kwargs.pop("cache_options", {}).copy()
- return fs.open(
- fn,
- mode=mode,
- cache_type="parts",
- cache_options={
- **options,
- **{
- "data": data.get(fn, {}),
- "strict": strict,
- },
- },
- **kwargs,
- )
-
-
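Putting the docstring above to work, a typical call looks like the sketch below. The path, column names, and `storage_options` are placeholders, and it assumes pandas, s3fs, and one of the two parquet engines are installed:

```python
# Illustrative only: the bucket, key, columns and options are made up.
import pandas as pd
from fsspec.parquet import open_parquet_file

with open_parquet_file(
    "s3://my-bucket/data/part-0.parquet",   # hypothetical remote file
    columns=["id", "value"],                # only these column chunks are fetched
    row_groups=[0],                         # restrict to the first row group
    storage_options={"anon": True},
) as f:
    df = pd.read_parquet(f, columns=["id", "value"])
```

Because the handle is opened with the "parts" cache, reads issued by the parquet reader are served from the pre-fetched byte ranges instead of triggering one remote request per column chunk.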
-def _get_parquet_byte_ranges(
- paths,
- fs,
- metadata=None,
- columns=None,
- row_groups=None,
- max_gap=64_000,
- max_block=256_000_000,
- footer_sample_size=1_000_000,
- engine="auto",
-):
- """Get a dictionary of the known byte ranges needed
- to read a specific column/row-group selection from a
- Parquet dataset. Each value in the output dictionary
- is intended for use as the `data` argument for the
- `KnownPartsOfAFile` caching strategy of a single path.
- """
-
- # Set engine if necessary
- if isinstance(engine, str):
- engine = _set_engine(engine)
-
- # Pass to specialized function if metadata is defined
- if metadata is not None:
-
- # Use the provided parquet metadata object
- # to avoid transferring/parsing footer metadata
- return _get_parquet_byte_ranges_from_metadata(
- metadata,
- fs,
- engine,
- columns=columns,
- row_groups=row_groups,
- max_gap=max_gap,
- max_block=max_block,
- )
-
- # Get file sizes asynchronously
- file_sizes = fs.sizes(paths)
-
- # Populate global paths, starts, & ends
- result = {}
- data_paths = []
- data_starts = []
- data_ends = []
- add_header_magic = True
- if columns is None and row_groups is None:
- # We are NOT selecting specific columns or row-groups.
- #
- # We can avoid sampling the footers, and just transfer
- # all file data with cat_ranges
- for i, path in enumerate(paths):
- result[path] = {}
- for b in range(0, file_sizes[i], max_block):
- data_paths.append(path)
- data_starts.append(b)
- data_ends.append(min(b + max_block, file_sizes[i]))
- add_header_magic = False # "Magic" should already be included
- else:
- # We ARE selecting specific columns or row-groups.
- #
- # Gather file footers.
- # We just take the last `footer_sample_size` bytes of each
- # file (or the entire file if it is smaller than that)
- footer_starts = []
- footer_ends = []
- for i, path in enumerate(paths):
- footer_ends.append(file_sizes[i])
- sample_size = max(0, file_sizes[i] - footer_sample_size)
- footer_starts.append(sample_size)
- footer_samples = fs.cat_ranges(paths, footer_starts, footer_ends)
-
- # Check our footer samples and re-sample if necessary.
- missing_footer_starts = footer_starts.copy()
- large_footer = 0
- for i, path in enumerate(paths):
- footer_size = int.from_bytes(footer_samples[i][-8:-4], "little")
- real_footer_start = file_sizes[i] - (footer_size + 8)
- if real_footer_start < footer_starts[i]:
- missing_footer_starts[i] = real_footer_start
- large_footer = max(large_footer, (footer_size + 8))
- if large_footer:
- warnings.warn(
- f"Not enough data was used to sample the parquet footer. "
- f"Try setting footer_sample_size >= {large_footer}."
- )
- for i, block in enumerate(
- fs.cat_ranges(
- paths,
- missing_footer_starts,
- footer_starts,
- )
- ):
- footer_samples[i] = block + footer_samples[i]
- footer_starts[i] = missing_footer_starts[i]
-
- # Calculate required byte ranges for each path
- for i, path in enumerate(paths):
-
- # Deal with small-file case.
- # Just include all remaining bytes of the file
- # in a single range.
- if file_sizes[i] < max_block:
- if footer_starts[i] > 0:
- # Only need to transfer the data if the
- # footer sample isn't already the whole file
- data_paths.append(path)
- data_starts.append(0)
- data_ends.append(footer_starts[i])
- continue
-
- # Use "engine" to collect data byte ranges
- path_data_starts, path_data_ends = engine._parquet_byte_ranges(
- columns,
- row_groups=row_groups,
- footer=footer_samples[i],
- footer_start=footer_starts[i],
- )
-
- data_paths += [path] * len(path_data_starts)
- data_starts += path_data_starts
- data_ends += path_data_ends
-
- # Merge adjacent offset ranges
- data_paths, data_starts, data_ends = merge_offset_ranges(
- data_paths,
- data_starts,
- data_ends,
- max_gap=max_gap,
- max_block=max_block,
- sort=False, # Should already be sorted
- )
-
- # Start by populating `result` with footer samples
- for i, path in enumerate(paths):
- result[path] = {(footer_starts[i], footer_ends[i]): footer_samples[i]}
-
- # Transfer the data byte-ranges into local memory
- _transfer_ranges(fs, result, data_paths, data_starts, data_ends)
-
- # Add b"PAR1" to header if necessary
- if add_header_magic:
- _add_header_magic(result)
-
- return result
-
-
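For orientation, each value in the dictionary built above maps `(start, stop)` byte ranges to raw bytes and is handed to fsspec's "parts" cache exactly as `open_parquet_file` does. A hand-built equivalent with placeholder offsets and payloads would look like:

```python
# Illustrative shape only; offsets and payloads are invented for the example.
import fsspec

known_parts = {
    (0, 4): b"PAR1",                      # header magic, as added by _add_header_magic
    (4, 131_072): b"...",                 # a merged column-chunk byte range
    (9_900_000, 10_000_000): b"...",      # footer sample
}

fs = fsspec.filesystem("s3", anon=True)   # hypothetical filesystem
f = fs.open(
    "s3://my-bucket/data/part-0.parquet", # placeholder path
    mode="rb",
    cache_type="parts",
    cache_options={"data": known_parts, "strict": False},
)
```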
-def _get_parquet_byte_ranges_from_metadata(
- metadata,
- fs,
- engine,
- columns=None,
- row_groups=None,
- max_gap=64_000,
- max_block=256_000_000,
-):
- """Simplified version of `_get_parquet_byte_ranges` for
- the case that an engine-specific `metadata` object is
- provided, and the remote footer metadata does not need to
- be transferred before calculating the required byte ranges.
- """
-
- # Use "engine" to collect data byte ranges
- data_paths, data_starts, data_ends = engine._parquet_byte_ranges(
- columns,
- row_groups=row_groups,
- metadata=metadata,
- )
-
- # Merge adjacent offset ranges
- data_paths, data_starts, data_ends = merge_offset_ranges(
- data_paths,
- data_starts,
- data_ends,
- max_gap=max_gap,
- max_block=max_block,
- sort=False, # Should be sorted
- )
-
- # Transfer the data byte-ranges into local memory
- result = {fn: {} for fn in list(set(data_paths))}
- _transfer_ranges(fs, result, data_paths, data_starts, data_ends)
-
- # Add b"PAR1" to header
- _add_header_magic(result)
-
- return result
-
-
-def _transfer_ranges(fs, blocks, paths, starts, ends):
- # Use cat_ranges to gather the data byte_ranges
- ranges = (paths, starts, ends)
- for path, start, stop, data in zip(*ranges, fs.cat_ranges(*ranges)):
- blocks[path][(start, stop)] = data
-
-
-def _add_header_magic(data):
- # Add b"PAR1" to file headers
- for i, path in enumerate(list(data.keys())):
- add_magic = True
- for k in data[path].keys():
- if k[0] == 0 and k[1] >= 4:
- add_magic = False
- break
- if add_magic:
- data[path][(0, 4)] = b"PAR1"
-
-
-def _set_engine(engine_str):
-
- # Define a list of parquet engines to try
- if engine_str == "auto":
- try_engines = ("fastparquet", "pyarrow")
- elif not isinstance(engine_str, str):
- raise ValueError(
- "Failed to set parquet engine! "
- "Please pass 'fastparquet', 'pyarrow', or 'auto'"
- )
- elif engine_str not in ("fastparquet", "pyarrow"):
- raise ValueError(f"{engine_str} engine not supported by `fsspec.parquet`")
- else:
- try_engines = [engine_str]
-
- # Try importing the engines in `try_engines`,
- # and choose the first one that succeeds
- for engine in try_engines:
- try:
- if engine == "fastparquet":
- return FastparquetEngine()
- elif engine == "pyarrow":
- return PyarrowEngine()
- except ImportError:
- pass
-
- # Raise an error if a supported parquet engine
- # was not found
- raise ImportError(
- f"The following parquet engines are not installed "
- f"in your python environment: {try_engines}."
- f"Please install 'fastparquert' or 'pyarrow' to "
- f"utilize the `fsspec.parquet` module."
- )
-
-
-class FastparquetEngine:
-
- # The purpose of the FastparquetEngine class is
- # to check if fastparquet can be imported (on initialization)
- # and to define a `_parquet_byte_ranges` method. In the
- # future, this class may also be used to define other
- # methods/logic that are specific to fastparquet.
-
- def __init__(self):
- import fastparquet as fp
-
- self.fp = fp
-
- def _row_group_filename(self, row_group, pf):
- return pf.row_group_filename(row_group)
-
- def _parquet_byte_ranges(
- self,
- columns,
- row_groups=None,
- metadata=None,
- footer=None,
- footer_start=None,
- ):
-
- # Initialize offset ranges and define ParquetFile metadata
- pf = metadata
- data_paths, data_starts, data_ends = [], [], []
- if pf is None:
- pf = self.fp.ParquetFile(io.BytesIO(footer))
-
- # Convert columns to a set and add any index columns
- # specified in the pandas metadata (just in case)
- column_set = None if columns is None else set(columns)
- if column_set is not None and hasattr(pf, "pandas_metadata"):
- md_index = [
- ind
- for ind in pf.pandas_metadata.get("index_columns", [])
- # Ignore RangeIndex information
- if not isinstance(ind, dict)
- ]
- column_set |= set(md_index)
-
- # Check if row_groups is a list of integers
- # or a list of row-group metadata
- if row_groups and not isinstance(row_groups[0], int):
- # Input row_groups contains row-group metadata
- row_group_indices = None
- else:
- # Input row_groups contains row-group indices
- row_group_indices = row_groups
- row_groups = pf.row_groups
-
- # Loop through column chunks to add required byte ranges
- for r, row_group in enumerate(row_groups):
- # Skip this row-group if we are targeting
- # specific row-groups
- if row_group_indices is None or r in row_group_indices:
-
- # Find the target parquet-file path for `row_group`
- fn = self._row_group_filename(row_group, pf)
-
- for column in row_group.columns:
- name = column.meta_data.path_in_schema[0]
- # Skip this column if we are targeting
- # specific columns
- if column_set is None or name in column_set:
- file_offset0 = column.meta_data.dictionary_page_offset
- if file_offset0 is None:
- file_offset0 = column.meta_data.data_page_offset
- num_bytes = column.meta_data.total_compressed_size
- if footer_start is None or file_offset0 < footer_start:
- data_paths.append(fn)
- data_starts.append(file_offset0)
- data_ends.append(
- min(
- file_offset0 + num_bytes,
- footer_start or (file_offset0 + num_bytes),
- )
- )
-
- if metadata:
- # The metadata in this call may map to multiple
- # file paths. Need to include `data_paths`
- return data_paths, data_starts, data_ends
- return data_starts, data_ends
-
-
-class PyarrowEngine:
-
- # The purpose of the PyarrowEngine class is
- # to check if pyarrow can be imported (on initialization)
- # and to define a `_parquet_byte_ranges` method. In the
- # future, this class may also be used to define other
- # methods/logic that are specific to pyarrow.
-
- def __init__(self):
- import pyarrow.parquet as pq
-
- self.pq = pq
-
- def _row_group_filename(self, row_group, metadata):
- raise NotImplementedError
-
- def _parquet_byte_ranges(
- self,
- columns,
- row_groups=None,
- metadata=None,
- footer=None,
- footer_start=None,
- ):
-
- if metadata is not None:
- raise ValueError("metadata input not supported for PyarrowEngine")
-
- data_starts, data_ends = [], []
- md = self.pq.ParquetFile(io.BytesIO(footer)).metadata
-
- # Convert columns to a set and add any index columns
- # specified in the pandas metadata (just in case)
- column_set = None if columns is None else set(columns)
- if column_set is not None:
- schema = md.schema.to_arrow_schema()
- has_pandas_metadata = (
- schema.metadata is not None and b"pandas" in schema.metadata
- )
- if has_pandas_metadata:
- md_index = [
- ind
- for ind in json.loads(
- schema.metadata[b"pandas"].decode("utf8")
- ).get("index_columns", [])
- # Ignore RangeIndex information
- if not isinstance(ind, dict)
- ]
- column_set |= set(md_index)
-
- # Loop through column chunks to add required byte ranges
- for r in range(md.num_row_groups):
- # Skip this row-group if we are targeting
- # specific row-groups
- if row_groups is None or r in row_groups:
- row_group = md.row_group(r)
- for c in range(row_group.num_columns):
- column = row_group.column(c)
- name = column.path_in_schema
- # Skip this column if we are targeting
- # specific columns
- split_name = name.split(".")[0]
- if (
- column_set is None
- or name in column_set
- or split_name in column_set
- ):
- file_offset0 = column.dictionary_page_offset
- if file_offset0 is None:
- file_offset0 = column.data_page_offset
- num_bytes = column.total_compressed_size
- if file_offset0 < footer_start:
- data_starts.append(file_offset0)
- data_ends.append(
- min(file_offset0 + num_bytes, footer_start)
- )
- return data_starts, data_ends
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/transformer.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/transformer.py
deleted file mode 100644
index f3d2aa093c748bbc1408491cacab153977b4a4cb..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/transformer.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from models.modules.transformer_modules import *
-
-
-class Transformer(nn.Module):
- def __init__(self, dim, depth, heads, win_size, dim_head, mlp_dim,
- dropout=0., patch_num=None, ape=None, rpe=None, rpe_pos=1):
- super().__init__()
-
- self.absolute_pos_embed = None if patch_num is None or ape is None else AbsolutePosition(dim, dropout,
- patch_num, ape)
- self.pos_dropout = nn.Dropout(dropout)
- self.layers = nn.ModuleList([])
- for _ in range(depth):
- self.layers.append(nn.ModuleList([
- PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout, patch_num=patch_num,
- rpe=rpe, rpe_pos=rpe_pos)),
- PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout))
- ]))
-
- def forward(self, x):
- if self.absolute_pos_embed is not None:
- x = self.absolute_pos_embed(x)
- x = self.pos_dropout(x)
- for attn, ff in self.layers:
- x = attn(x) + x
- x = ff(x) + x
- return x
-
-
-if __name__ == '__main__':
- token_dim = 1024
- toke_len = 256
-
- transformer = Transformer(dim=token_dim, depth=6, heads=16,
- dim_head=64, mlp_dim=2048, dropout=0.1,
- patch_num=256, ape='lr_parameter', rpe='lr_parameter_mirror')
-
- total = sum(p.numel() for p in transformer.parameters())
- trainable = sum(p.numel() for p in transformer.parameters() if p.requires_grad)
- print('parameter total:{:,}, trainable:{:,}'.format(total, trainable))
-
- input = torch.randn(1, toke_len, token_dim)
- output = transformer(input)
- print(output.shape)
diff --git a/spaces/DevashishBhake/Face_Mask_Detection/README.md b/spaces/DevashishBhake/Face_Mask_Detection/README.md
deleted file mode 100644
index 3915737634dec6da44df673fdf09a4578c43c9ad..0000000000000000000000000000000000000000
--- a/spaces/DevashishBhake/Face_Mask_Detection/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Face Mask Detection
-emoji: 🔥
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Docfile/open_llm_leaderboard/Makefile b/spaces/Docfile/open_llm_leaderboard/Makefile
deleted file mode 100644
index b5685772804c8af4235a8504dc6752bfc9ae5d1d..0000000000000000000000000000000000000000
--- a/spaces/Docfile/open_llm_leaderboard/Makefile
+++ /dev/null
@@ -1,13 +0,0 @@
-.PHONY: style quality
-
-
-style:
- python -m black --line-length 119 .
- python -m isort .
- ruff check --fix .
-
-
-quality:
- python -m black --check --line-length 119 .
- python -m isort --check-only .
- ruff check .
diff --git a/spaces/DragGan/DragGan/stylegan_human/utils/__init__.py b/spaces/DragGan/DragGan/stylegan_human/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Eddycrack864/Applio-Inference/diffq/diffq.py b/spaces/Eddycrack864/Applio-Inference/diffq/diffq.py
deleted file mode 100644
index b475ec7f55227417b014c69b5cf55033182113e1..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/diffq/diffq.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Differentiable quantizer based on scaled noise injection.
-"""
-from dataclasses import dataclass
-import math
-import typing as tp
-
-import torch
-
-from .base import BaseQuantizer
-from .uniform import uniform_quantize, uniform_unquantize
-from .utils import simple_repr
-
-
-class DiffQuantizer(BaseQuantizer):
- @dataclass
- class _QuantizedParam(BaseQuantizer._QuantizedParam):
- logit: torch.nn.Parameter
-
- def __init__(self, model: torch.nn.Module, min_size: float = 0.01, float16: bool = False,
- group_size: int = 1, min_bits: float = 2, max_bits: float = 15,
- param="bits", noise="gaussian",
- init_bits: float = 8, extra_bits: float = 0, suffix: str = "_diffq",
- exclude: tp.List[str] = [], detect_bound: bool = True):
- """
- Differentiable quantizer based on scaled noise injection.
- For every parameter `p` in the model, this introduces a number of bits parameter
- `b` with the same dimensions (when group_size = 1).
- Before each forward, `p` is replaced by `p + U`
- with U uniform iid noise with range [-d/2, d/2], with `d` the uniform quantization
- step for `b` bits.
- This noise approximates the quantization noise in a differentiable manner, both
- with respect to the unquantized parameter `p` and the number of bits `b`.
-
- At evaluation (as detected with `model.eval()`), the model is replaced
- by its true quantized version, and restored when going back to training.
-
- When doing actual quantization (for serialization, or evaluation),
- the number of bits is rounded to the nearest integer, and needs to be stored along.
- This will cost a few bits per dimension. To reduce this cost, one can use `group_size`,
- which will use a single noise level for multiple weight entries.
-
- You can use the `DiffQuantizer.model_size` method to get a differentiable estimate of the
- model size in MB. You can then use this estimate as a penalty in your training loss.
-
- Args:
- model (torch.nn.Module): model to quantize
- min_size (float): minimum size in MB of a parameter to be quantized.
- float16 (bool): if a layer is smaller than min_size, should we still do float16?
- group_size (int): weight entries are grouped together to reduce the number
- of noise scales to store. This should divide the size of all parameters
- bigger than min_size.
- min_bits (float): minimal number of bits.
- max_bits (float): maximal number of bits.
- init_bits (float): initial number of bits.
- extra_bits (float): extra bits to add for actual quantization (before roundoff).
- suffix (str): suffix used for the name of the extra noise scale parameters.
- exclude (list[str]): list of patterns used to match parameters to exclude.
- For instance `['bias']` to exclude all bias terms.
- detect_bound (bool): if True, will detect bound parameters and reuse
- the same quantized tensor for both, as well as the same number of bits.
-
- ..Warning::
- You must call `model.train()` and `model.eval()` for `DiffQuantizer` to work properly.
-
- """
- self.group_size = group_size
- self.min_bits = min_bits
- self.max_bits = max_bits
- self.init_bits = init_bits
- self.extra_bits = extra_bits
- self.suffix = suffix
- self.param = param
- self.noise = noise
- assert noise in ["gaussian", "uniform"]
- self._optimizer_setup = False
-
- self._min_noise = 1 / (2 ** self.max_bits - 1)
- self._max_noise = 1 / (2 ** self.min_bits - 1)
-
- assert group_size >= 0
- assert min_bits < init_bits < max_bits, \
- "init_bits must be between min_bits and max_bits excluded3"
-
- for name, _ in model.named_parameters():
- if name.endswith(suffix):
- raise RuntimeError("The model already has some noise scales parameters, "
- "maybe you used twice a DiffQuantizer on the same model?.")
-
- super().__init__(model, min_size, float16, exclude, detect_bound)
-
- def _get_bits(self, logit: torch.Tensor):
- if self.param == "noise":
- return torch.log2(1 + 1 / self._get_noise_scale(logit))
- else:
- t = torch.sigmoid(logit)
- return self.max_bits * t + (1 - t) * self.min_bits
-
- def _get_noise_scale(self, logit: torch.Tensor):
- if self.param == "noise":
- t = torch.sigmoid(logit)
- return torch.exp(t * math.log(self._min_noise) + (1 - t) * math.log(self._max_noise))
- else:
- return 1 / (2 ** self._get_bits(logit) - 1)
-
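The two helpers above are inverses of one another: for `b` bits the uniform step over a unit range is `d = 1 / (2**b - 1)`, and conversely `b = log2(1 + 1/d)`. A quick standalone check of that relationship:

```python
# Numeric sanity check of the bits <-> noise-scale mapping used above.
import math

for bits in (2, 8, 15):
    step = 1 / (2 ** bits - 1)           # quantization step, i.e. the noise scale
    recovered = math.log2(1 + 1 / step)  # what _get_bits computes from that scale
    print(bits, step, recovered)         # e.g. 8 -> step ~= 0.00392, recovered ~= 8.0
```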
- def _register_param(self, name, param, module, other):
- if other is not None:
- return self.__class__._QuantizedParam(
- name=name, param=param, module=module, logit=other.logit, other=other)
- assert self.group_size == 0 or param.numel() % self.group_size == 0
- # we want the initial number of bits to be init_bits.
- if self.param == "noise":
- noise_scale = 1 / (2 ** self.init_bits - 1)
- t = (math.log(noise_scale) - math.log(self._max_noise)) / (
- math.log(self._min_noise) - math.log(self._max_noise))
- else:
- t = (self.init_bits - self.min_bits) / (self.max_bits - self.min_bits)
- assert 0 < t < 1
- logit = torch.logit(torch.tensor(float(t)))
- assert abs(self._get_bits(logit) - self.init_bits) < 1e-5
- if self.group_size > 0:
- nparam = param.numel() // self.group_size
- else:
- nparam = 1
- logit = torch.nn.Parameter(
- torch.full(
- (nparam,),
- logit,
- device=param.device))
- module.register_parameter(name + self.suffix, logit)
- return self.__class__._QuantizedParam(
- name=name, param=param, module=module, logit=logit, other=None)
-
- def clear_optimizer(self, optimizer: torch.optim.Optimizer):
- params = [qp.logit for qp in self._qparams]
-
- for group in optimizer.param_groups:
- new_params = []
- for q in list(group["params"]):
- matched = False
- for p in params:
- if p is q:
- matched = True
- if not matched:
- new_params.append(q)
- group["params"][:] = new_params
-
- def setup_optimizer(self, optimizer: torch.optim.Optimizer,
- lr: float = 1e-3, **kwargs):
- """
- Setup the optimizer to tune the number of bits. In particular, this will deactivate
- weight decay for the bits parameters.
-
- Args:
- optimizer (torch.Optimizer): optimizer to use.
- lr (float): specific learning rate for the bits parameters. 1e-3
- is perfect for Adam.
- kwargs (dict): overrides for other optimization parameters for the bits.
- """
- assert not self._optimizer_setup
- self._optimizer_setup = True
-
- params = [qp.logit for qp in self._qparams]
-
- for group in optimizer.param_groups:
- for q in list(group["params"]):
- for p in params:
- if p is q:
- raise RuntimeError("You should create the optimizer "
- "before the quantizer!")
-
- group = {"params": params, "lr": lr, "weight_decay": 0}
- group.update(kwargs)
- optimizer.add_param_group(group)
-
- def no_optimizer(self):
- """
- Call this if you do not want to use an optimizer.
- """
- self._optimizer_setup = True
-
- def check_unused(self):
- for qparam in self._qparams:
- if qparam.other is not None:
- continue
- grad = qparam.param.grad
- if grad is None or (grad == 0).all():
- if qparam.logit.grad is not None:
- qparam.logit.grad.data.zero_()
-
- def model_size(self, exact=False):
- """
- Differentiable estimate of the model size.
- The size is returned in MB.
-
- If `exact` is True, then the output is no longer differentiable but
- reflects exactly an achievable size, even without compression,
- i.e. the same as returned by `naive_model_size()`.
- """
- total = super().model_size()
- subtotal = 0
- for qparam in self._qparams:
- # only count the first appearance of a Parameter
- if qparam.other is not None:
- continue
- bits = self.extra_bits + self._get_bits(qparam.logit)
- if exact:
- bits = bits.round().clamp(1, 15)
- if self.group_size == 0:
- group_size = qparam.param.numel()
- else:
- group_size = self.group_size
- subtotal += group_size * bits.sum()
- subtotal += 2 * 32 # param scale
-
- # Number of bits to represent each number of bits
- bits_bits = math.ceil(math.log2(1 + (bits.max().round().item() - self.min_bits)))
- subtotal += 8 # 8 bits for bits_bits
- subtotal += bits_bits * bits.numel()
-
- subtotal /= 2 ** 20 * 8 # bits -> MegaBytes
- return total + subtotal
-
- def true_model_size(self):
- """
- Naive model size without zlib compression.
- """
- return self.model_size(exact=True).item()
-
- def _pre_forward_train(self):
- if not self._optimizer_setup:
- raise RuntimeError("You must call `setup_optimizer()` on your optimizer "
- "before starting training.")
- for qparam in self._qparams:
- if qparam.other is not None:
- noisy = qparam.other.module._parameters[qparam.other.name]
- else:
- bits = self._get_bits(qparam.logit)[:, None]
- if self.group_size == 0:
- p_flat = qparam.param.view(-1)
- else:
- p_flat = qparam.param.view(-1, self.group_size)
- scale = p_flat.max() - p_flat.min()
- unit = 1 / (2**bits - 1)
- if self.noise == "uniform":
- noise_source = (torch.rand_like(p_flat) - 0.5)
- elif self.noise == "gaussian":
- noise_source = torch.randn_like(p_flat) / 2
- noise = scale * unit * noise_source
- noisy = p_flat + noise
- # We bypass the checks by PyTorch on parameters being leaves
- qparam.module._parameters[qparam.name] = noisy.view_as(qparam.param)
- return True
-
- def _post_forward_train(self):
- for qparam in self._qparams:
- qparam.module._parameters[qparam.name] = qparam.param
- return True
-
- def _quantize_param(self, qparam: _QuantizedParam) -> tp.Any:
- bits = self.extra_bits + self._get_bits(qparam.logit)
- bits = bits.round().clamp(1, 15)[:, None].byte()
- if self.group_size == 0:
- p = qparam.param.data.view(-1)
- else:
- p = qparam.param.data.view(-1, self.group_size)
- levels, scales = uniform_quantize(p, bits)
- return levels, scales, bits
-
- def _unquantize_param(self, qparam: _QuantizedParam, quantized: tp.Any) -> torch.Tensor:
- levels, param_scale, bits = quantized
- return uniform_unquantize(levels, param_scale, bits).view_as(qparam.param.data)
-
- def detach(self):
- super().detach()
- for qparam in self._qparams:
- delattr(qparam.module, qparam.name + self.suffix)
-
- def __repr__(self):
- return simple_repr(self)
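Tying the pieces above together, the intended workflow is: build the optimizer first, attach the quantizer, register the bits parameters with `setup_optimizer`, and add `model_size()` as a penalty. Below is a minimal sketch under those assumptions; the model, data, and penalty weight are placeholders, not values from this repository:

```python
# Minimal sketch of the DiffQuantizer workflow described in the docstrings above.
import torch
from diffq import DiffQuantizer

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)  # create optimizer first

# group_size must divide the size of every quantized parameter; min_size is in MB.
quantizer = DiffQuantizer(model, group_size=8, min_size=0.001)
quantizer.setup_optimizer(optimizer, lr=1e-3)   # bits get their own group, no weight decay

penalty = 5.0                                    # weight of the size penalty, tuned per task
model.train()                                    # noise injection only happens in train mode
for _ in range(10):
    x = torch.randn(32, 64)
    y = torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss = loss + penalty * quantizer.model_size()   # differentiable size estimate in MB
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model.eval()                                     # swaps in the truly quantized weights
print(f"quantized size: {quantizer.true_model_size():.3f} MB")
```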
diff --git a/spaces/Eddycrack864/Applio-Inference/easy_infer.py b/spaces/Eddycrack864/Applio-Inference/easy_infer.py
deleted file mode 100644
index 81a70d3648c38120f908cdaf2ea3bd15af9dec26..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/easy_infer.py
+++ /dev/null
@@ -1,1383 +0,0 @@
-import subprocess
-import os
-import sys
-import errno
-import shutil
-import yt_dlp
-from mega import Mega
-import datetime
-import unicodedata
-import torch
-import glob
-import gradio as gr
-import gdown
-import zipfile
-import traceback
-import json
-import mdx
-from mdx_processing_script import get_model_list,id_to_ptm,prepare_mdx,run_mdx
-import requests
-import wget
-import ffmpeg
-import hashlib
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from unidecode import unidecode
-import re
-import time
-from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-from infer.modules.vc.pipeline import Pipeline
-VC = Pipeline
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from MDXNet import MDXNetDereverb
-from configs.config import Config
-from infer_uvr5 import _audio_pre_, _audio_pre_new
-from huggingface_hub import HfApi, list_models
-from huggingface_hub import login
-from i18n import I18nAuto
-i18n = I18nAuto()
-from bs4 import BeautifulSoup
-from sklearn.cluster import MiniBatchKMeans
-from dotenv import load_dotenv
-load_dotenv()
-config = Config()
-tmp = os.path.join(now_dir, "TEMP")
-shutil.rmtree(tmp, ignore_errors=True)
-os.environ["TEMP"] = tmp
-weight_root = os.getenv("weight_root")
-weight_uvr5_root = os.getenv("weight_uvr5_root")
-index_root = os.getenv("index_root")
-audio_root = "audios"
-names = []
-for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
-index_paths = []
-
-global indexes_list
-indexes_list = []
-
-audio_paths = []
-for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s\\%s" % (root, name))
-
-for root, dirs, files in os.walk(audio_root, topdown=False):
- for name in files:
- audio_paths.append("%s/%s" % (root, name))
-
-uvr5_names = []
-for name in os.listdir(weight_uvr5_root):
- if name.endswith(".pth") or "onnx" in name:
- uvr5_names.append(name.replace(".pth", ""))
-
-def calculate_md5(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-
-def format_title(title):
- formatted_title = re.sub(r'[^\w\s-]', '', title)
- formatted_title = formatted_title.replace(" ", "_")
- return formatted_title
-
-def silentremove(filename):
- try:
- os.remove(filename)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
-def get_md5(temp_folder):
- for root, subfolders, files in os.walk(temp_folder):
- for file in files:
- if not file.startswith("G_") and not file.startswith("D_") and file.endswith(".pth") and not "_G_" in file and not "_D_" in file:
- md5_hash = calculate_md5(os.path.join(root, file))
- return md5_hash
-
- return None
-
-def find_parent(search_dir, file_name):
- for dirpath, dirnames, filenames in os.walk(search_dir):
- if file_name in filenames:
- return os.path.abspath(dirpath)
- return None
-
-def find_folder_parent(search_dir, folder_name):
- for dirpath, dirnames, filenames in os.walk(search_dir):
- if folder_name in dirnames:
- return os.path.abspath(dirpath)
- return None
-
-
-
-def download_from_url(url):
- parent_path = find_folder_parent(".", "pretrained_v2")
- zips_path = os.path.join(parent_path, 'zips')
-
- if url != '':
- print(i18n("Downloading the file: ") + f"{url}")
- if "drive.google.com" in url:
- if "file/d/" in url:
- file_id = url.split("file/d/")[1].split("/")[0]
- elif "id=" in url:
- file_id = url.split("id=")[1].split("&")[0]
- else:
- return None
-
- if file_id:
- os.chdir('./zips')
- result = subprocess.run(["gdown", f"https://drive.google.com/uc?id={file_id}", "--fuzzy"], capture_output=True, text=True, encoding='utf-8')
- if "Too many users have viewed or downloaded this file recently" in str(result.stderr):
- return "too much use"
- if "Cannot retrieve the public link of the file." in str(result.stderr):
- return "private link"
- print(result.stderr)
-
- elif "/blob/" in url:
- os.chdir('./zips')
- url = url.replace("blob", "resolve")
- response = requests.get(url)
- if response.status_code == 200:
- file_name = url.split('/')[-1]
- with open(os.path.join(zips_path, file_name), "wb") as newfile:
- newfile.write(response.content)
- else:
- os.chdir(parent_path)
- elif "mega.nz" in url:
- if "#!" in url:
- file_id = url.split("#!")[1].split("!")[0]
- elif "file/" in url:
- file_id = url.split("file/")[1].split("/")[0]
- else:
- return None
- if file_id:
- m = Mega()
- m.download_url(url, zips_path)
- elif "/tree/main" in url:
- response = requests.get(url)
- soup = BeautifulSoup(response.content, 'html.parser')
- temp_url = ''
- for link in soup.find_all('a', href=True):
- if link['href'].endswith('.zip'):
- temp_url = link['href']
- break
- if temp_url:
- url = temp_url
- url = url.replace("blob", "resolve")
- if "huggingface.co" not in url:
- url = "https://huggingface.co" + url
-
- wget.download(url)
- else:
- print("No .zip file found on the page.")
- elif "cdn.discordapp.com" in url:
- file = requests.get(url)
- if file.status_code == 200:
- name = url.split('/')
- with open(os.path.join(zips_path, name[len(name)-1]), "wb") as newfile:
- newfile.write(file.content)
- else:
- return None
- elif "pixeldrain.com" in url:
- try:
- file_id = url.split("pixeldrain.com/u/")[1]
- os.chdir('./zips')
- print(file_id)
- response = requests.get(f"https://pixeldrain.com/api/file/{file_id}")
- if response.status_code == 200:
- file_name = response.headers.get("Content-Disposition").split('filename=')[-1].strip('";')
- if not os.path.exists(zips_path):
- os.makedirs(zips_path)
- with open(os.path.join(zips_path, file_name), "wb") as newfile:
- newfile.write(response.content)
- os.chdir(parent_path)
- return "downloaded"
- else:
- os.chdir(parent_path)
- return None
- except Exception as e:
- print(e)
- os.chdir(parent_path)
- return None
- else:
- os.chdir('./zips')
- wget.download(url)
-
- os.chdir(parent_path)
- print(i18n("Full download"))
- return "downloaded"
- else:
- return None
-
-class error_message(Exception):
- def __init__(self, mensaje):
- self.mensaje = mensaje
- super().__init__(mensaje)
-
-def get_vc(sid, to_return_protect0, to_return_protect1):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
- if hubert_model is not None:
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr
- hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return (
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- )
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 0:
- to_return_protect0 = to_return_protect1 = {
- "visible": False,
- "value": 0.5,
- "__type__": "update",
- }
- else:
- to_return_protect0 = {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- }
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return (
- {"visible": True, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1,
- )
-
-def load_downloaded_model(url):
- parent_path = find_folder_parent(".", "pretrained_v2")
- try:
- infos = []
- logs_folders = ['0_gt_wavs','1_16k_wavs','2a_f0','2b-f0nsf','3_feature256','3_feature768']
- zips_path = os.path.join(parent_path, 'zips')
- unzips_path = os.path.join(parent_path, 'unzips')
- weights_path = os.path.join(parent_path, 'weights')
- logs_dir = ""
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- infos.append(i18n("The file could not be downloaded."))
- yield "\n".join(infos)
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(i18n("Too many users have recently viewed or downloaded this file"))
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join(zips_path,filename)
- print(i18n("Proceeding with the extraction..."))
- infos.append(i18n("Proceeding with the extraction..."))
- shutil.unpack_archive(zipfile_path, unzips_path, 'zip')
- model_name = os.path.basename(zipfile_path)
- logs_dir = os.path.join(parent_path,'logs', os.path.normpath(str(model_name).replace(".zip","")))
- yield "\n".join(infos)
- else:
- print(i18n("Unzip error."))
- infos.append(i18n("Unzip error."))
- yield "\n".join(infos)
-
- index_file = False
- model_file = False
- D_file = False
- G_file = False
-
- for path, subdirs, files in os.walk(unzips_path):
- for item in files:
- item_path = os.path.join(path, item)
- if not 'G_' in item and not 'D_' in item and item.endswith('.pth'):
- model_file = True
- model_name = item.replace(".pth","")
- logs_dir = os.path.join(parent_path,'logs', model_name)
- if os.path.exists(logs_dir):
- shutil.rmtree(logs_dir)
- os.mkdir(logs_dir)
- if not os.path.exists(weights_path):
- os.mkdir(weights_path)
- if os.path.exists(os.path.join(weights_path, item)):
- os.remove(os.path.join(weights_path, item))
- if os.path.exists(item_path):
- shutil.move(item_path, weights_path)
-
- if not model_file and not os.path.exists(logs_dir):
- os.mkdir(logs_dir)
- for path, subdirs, files in os.walk(unzips_path):
- for item in files:
- item_path = os.path.join(path, item)
- if item.startswith('added_') and item.endswith('.index'):
- index_file = True
- if os.path.exists(item_path):
- if os.path.exists(os.path.join(logs_dir, item)):
- os.remove(os.path.join(logs_dir, item))
- shutil.move(item_path, logs_dir)
- if item.startswith('total_fea.npy') or item.startswith('events.'):
- if os.path.exists(item_path):
- if os.path.exists(os.path.join(logs_dir, item)):
- os.remove(os.path.join(logs_dir, item))
- shutil.move(item_path, logs_dir)
-
-
- result = ""
- if model_file:
- if index_file:
- print(i18n("The model works for inference, and has the .index file."))
- infos.append("\n" + i18n("The model works for inference, and has the .index file."))
- yield "\n".join(infos)
- else:
- print(i18n("The model works for inference, but it doesn't have the .index file."))
- infos.append("\n" + i18n("The model works for inference, but it doesn't have the .index file."))
- yield "\n".join(infos)
-
- if not index_file and not model_file:
- print(i18n("No relevant file was found to upload."))
- infos.append(i18n("No relevant file was found to upload."))
- yield "\n".join(infos)
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-def load_dowloaded_dataset(url):
- parent_path = find_folder_parent(".", "pretrained_v2")
- infos = []
- try:
- zips_path = os.path.join(parent_path, 'zips')
- unzips_path = os.path.join(parent_path, 'unzips')
- datasets_path = os.path.join(parent_path, 'datasets')
- audio_extenions =['wav', 'mp3', 'flac', 'ogg', 'opus',
- 'm4a', 'mp4', 'aac', 'alac', 'wma',
- 'aiff', 'webm', 'ac3']
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- if not os.path.exists(datasets_path):
- os.mkdir(datasets_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
-
- if not download_file:
- print(i18n("An error occurred downloading"))
- infos.append(i18n("An error occurred downloading"))
- yield "\n".join(infos)
- raise Exception(i18n("An error occurred downloading"))
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(i18n("Too many users have recently viewed or downloaded this file"))
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- zip_path = os.listdir(zips_path)
- foldername = ""
- for file in zip_path:
- if file.endswith('.zip'):
- file_path = os.path.join(zips_path, file)
- print("....")
- foldername = file.replace(".zip","").replace(" ","").replace("-","_")
- dataset_path = os.path.join(datasets_path, foldername)
- print(i18n("Proceeding with the extraction..."))
- infos.append(i18n("Proceeding with the extraction..."))
- yield "\n".join(infos)
- shutil.unpack_archive(file_path, unzips_path, 'zip')
- if os.path.exists(dataset_path):
- shutil.rmtree(dataset_path)
-
- os.mkdir(dataset_path)
-
- for root, subfolders, songs in os.walk(unzips_path):
- for song in songs:
- song_path = os.path.join(root, song)
- if song.endswith(tuple(audio_extenions)):
- formatted_song_name = format_title(os.path.splitext(song)[0])
- extension = os.path.splitext(song)[1]
- new_song_path = os.path.join(dataset_path, f"{formatted_song_name}{extension}")
- shutil.move(song_path, new_song_path)
- else:
- print(i18n("Unzip error."))
- infos.append(i18n("Unzip error."))
- yield "\n".join(infos)
-
-
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- print(i18n("The Dataset has been loaded successfully."))
- infos.append(i18n("The Dataset has been loaded successfully."))
- yield "\n".join(infos)
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-def save_model(modelname, save_action):
-
- parent_path = find_folder_parent(".", "pretrained_v2")
- zips_path = os.path.join(parent_path, 'zips')
- dst = os.path.join(zips_path,modelname)
- logs_path = os.path.join(parent_path, 'logs', modelname)
- weights_path = os.path.join(parent_path, 'weights', f"{modelname}.pth")
- save_folder = parent_path
- infos = []
-
- try:
- if not os.path.exists(logs_path):
- raise Exception("No model found.")
-
- if not 'content' in parent_path:
- save_folder = os.path.join(parent_path, 'RVC_Backup')
- else:
- save_folder = '/content/drive/MyDrive/RVC_Backup'
-
- infos.append(i18n("Save model"))
- yield "\n".join(infos)
-
- if not os.path.exists(save_folder):
- os.mkdir(save_folder)
- if not os.path.exists(os.path.join(save_folder, 'ManualTrainingBackup')):
- os.mkdir(os.path.join(save_folder, 'ManualTrainingBackup'))
- if not os.path.exists(os.path.join(save_folder, 'Finished')):
- os.mkdir(os.path.join(save_folder, 'Finished'))
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
-
- os.mkdir(zips_path)
- added_file = glob.glob(os.path.join(logs_path, "added_*.index"))
- d_file = glob.glob(os.path.join(logs_path, "D_*.pth"))
- g_file = glob.glob(os.path.join(logs_path, "G_*.pth"))
-
- if save_action == i18n("Choose the method"):
- raise Exception("No method choosen.")
-
- if save_action == i18n("Save all"):
- print(i18n("Save all"))
- save_folder = os.path.join(save_folder, 'ManualTrainingBackup')
- shutil.copytree(logs_path, dst)
- else:
- if not os.path.exists(dst):
- os.mkdir(dst)
-
- if save_action == i18n("Save D and G"):
- print(i18n("Save D and G"))
- save_folder = os.path.join(save_folder, 'ManualTrainingBackup')
- if len(d_file) > 0:
- shutil.copy(d_file[0], dst)
- if len(g_file) > 0:
- shutil.copy(g_file[0], dst)
-
- if len(added_file) > 0:
- shutil.copy(added_file[0], dst)
- else:
- infos.append(i18n("Saved without index..."))
-
- if save_action == i18n("Save voice"):
- print(i18n("Save voice"))
- save_folder = os.path.join(save_folder, 'Finished')
- if len(added_file) > 0:
- shutil.copy(added_file[0], dst)
- else:
- infos.append(i18n("Saved without index..."))
-
- yield "\n".join(infos)
- if not os.path.exists(weights_path):
- infos.append(i18n("Saved without inference model..."))
- else:
- shutil.copy(weights_path, dst)
-
- yield "\n".join(infos)
- infos.append("\n" + i18n("This may take a few minutes, please wait..."))
- yield "\n".join(infos)
-
- shutil.make_archive(os.path.join(zips_path,f"{modelname}"), 'zip', zips_path)
- shutil.move(os.path.join(zips_path,f"{modelname}.zip"), os.path.join(save_folder, f'{modelname}.zip'))
-
- shutil.rmtree(zips_path)
- infos.append("\n" + i18n("Model saved successfully"))
- yield "\n".join(infos)
-
- except Exception as e:
- print(e)
- if "No model found." in str(e):
- infos.append(i18n("The model you want to save does not exist, be sure to enter the correct name."))
- else:
- infos.append(i18n("An error occurred saving the model"))
-
- yield "\n".join(infos)
-
-def load_downloaded_backup(url):
- parent_path = find_folder_parent(".", "pretrained_v2")
- try:
- infos = []
- logs_folders = ['0_gt_wavs','1_16k_wavs','2a_f0','2b-f0nsf','3_feature256','3_feature768']
- zips_path = os.path.join(parent_path, 'zips')
- unzips_path = os.path.join(parent_path, 'unzips')
- weights_path = os.path.join(parent_path, 'weights')
- logs_dir = os.path.join(parent_path, 'logs')
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- infos.append(i18n("The file could not be downloaded."))
- yield "\n".join(infos)
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(i18n("Too many users have recently viewed or downloaded this file"))
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join(zips_path,filename)
- zip_dir_name = os.path.splitext(filename)[0]
- unzip_dir = unzips_path
- print(i18n("Proceeding with the extraction..."))
- infos.append(i18n("Proceeding with the extraction..."))
- shutil.unpack_archive(zipfile_path, unzip_dir, 'zip')
-
- if os.path.exists(os.path.join(unzip_dir, zip_dir_name)):
- shutil.move(os.path.join(unzip_dir, zip_dir_name), logs_dir)
- else:
- new_folder_path = os.path.join(logs_dir, zip_dir_name)
- os.mkdir(new_folder_path)
- for item_name in os.listdir(unzip_dir):
- item_path = os.path.join(unzip_dir, item_name)
- if os.path.isfile(item_path):
- shutil.move(item_path, new_folder_path)
- elif os.path.isdir(item_path):
- shutil.move(item_path, new_folder_path)
-
- yield "\n".join(infos)
- else:
- print(i18n("Unzip error."))
- infos.append(i18n("Unzip error."))
- yield "\n".join(infos)
-
- result = ""
-
- for filename in os.listdir(unzips_path):
- if filename.endswith(".zip"):
- silentremove(filename)
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(os.path.join(parent_path, 'unzips')):
- shutil.rmtree(os.path.join(parent_path, 'unzips'))
- print(i18n("The Backup has been uploaded successfully."))
- infos.append("\n" + i18n("The Backup has been uploaded successfully."))
- yield "\n".join(infos)
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-def save_to_wav(record_button):
- if record_button is None:
- pass
- else:
- path_to_file=record_button
- new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")+'.wav'
- new_path='./audios/'+new_name
- shutil.move(path_to_file,new_path)
- return new_name
-
-
-def change_choices2():
- audio_paths=[]
- for filename in os.listdir("./audios"):
- if filename.endswith(('wav', 'mp3', 'flac', 'ogg', 'opus',
- 'm4a', 'mp4', 'aac', 'alac', 'wma',
- 'aiff', 'webm', 'ac3')):
- audio_paths.append(os.path.join('./audios',filename).replace('\\', '/'))
- return {"choices": sorted(audio_paths), "__type__": "update"}, {"__type__": "update"}
-
-
-
-
-
-def uvr(input_url, output_path, model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0, architecture):
- carpeta_a_eliminar = "yt_downloads"
- if os.path.exists(carpeta_a_eliminar) and os.path.isdir(carpeta_a_eliminar):
- for archivo in os.listdir(carpeta_a_eliminar):
- ruta_archivo = os.path.join(carpeta_a_eliminar, archivo)
- if os.path.isfile(ruta_archivo):
- os.remove(ruta_archivo)
- elif os.path.isdir(ruta_archivo):
- shutil.rmtree(ruta_archivo)
-
-
-
- ydl_opts = {
- 'no-windows-filenames': True,
- 'restrict-filenames': True,
- 'extract_audio': True,
- 'format': 'bestaudio',
- 'quiet': True,
- 'no-warnings': True,
- }
-
- try:
- print(i18n("Downloading audio from the video..."))
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- info_dict = ydl.extract_info(input_url, download=False)
- formatted_title = format_title(info_dict.get('title', 'default_title'))
- formatted_outtmpl = output_path + '/' + formatted_title + '.wav'
- ydl_opts['outtmpl'] = formatted_outtmpl
- ydl = yt_dlp.YoutubeDL(ydl_opts)
- ydl.download([input_url])
- print(i18n("Audio downloaded!"))
- except Exception as error:
- print(i18n("An error occurred:"), error)
-
- actual_directory = os.path.dirname(__file__)
-
- vocal_directory = os.path.join(actual_directory, save_root_vocal)
- instrumental_directory = os.path.join(actual_directory, save_root_ins)
-
- vocal_formatted = f"vocal_{formatted_title}.wav.reformatted.wav_10.wav"
- instrumental_formatted = f"instrument_{formatted_title}.wav.reformatted.wav_10.wav"
-
- vocal_audio_path = os.path.join(vocal_directory, vocal_formatted)
- instrumental_audio_path = os.path.join(instrumental_directory, instrumental_formatted)
-
- vocal_formatted_mdx = f"{formatted_title}_vocal_.wav"
- instrumental_formatted_mdx = f"{formatted_title}_instrument_.wav"
-
- vocal_audio_path_mdx = os.path.join(vocal_directory, vocal_formatted_mdx)
- instrumental_audio_path_mdx = os.path.join(instrumental_directory, instrumental_formatted_mdx)
-
- if architecture == "VR":
- try:
- print(i18n("Starting audio conversion... (This might take a moment)"))
- inp_root, save_root_vocal, save_root_ins = [x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") for x in [inp_root, save_root_vocal, save_root_ins]]
- usable_files = [os.path.join(inp_root, file)
- for file in os.listdir(inp_root)
- if file.endswith(tuple(sup_audioext))]
-
-
- pre_fun = MDXNetDereverb(15) if model_name == "onnx_dereverb_By_FoxJoy" else (_audio_pre_ if "DeEcho" not in model_name else _audio_pre_new)(
- agg=int(agg),
- model_path=os.path.join(weight_uvr5_root, model_name + ".pth"),
- device=config.device,
- is_half=config.is_half,
- )
-
- try:
- if paths != None:
- paths = [path.name for path in paths]
- else:
- paths = usable_files
-
- except:
- traceback.print_exc()
- paths = usable_files
- print(paths)
- for path in paths:
- inp_path = os.path.join(inp_root, path)
- need_reformat, done = 1, 0
-
- try:
- info = ffmpeg.probe(inp_path, cmd="ffprobe")
- if info["streams"][0]["channels"] == 2 and info["streams"][0]["sample_rate"] == "44100":
- need_reformat = 0
- pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal, format0)
- done = 1
- except:
- traceback.print_exc()
-
- if need_reformat:
- tmp_path = f"{tmp}/{os.path.basename(inp_path)}.reformatted.wav"
- os.system(f"ffmpeg -i {inp_path} -vn -acodec pcm_s16le -ac 2 -ar 44100 {tmp_path} -y")
- inp_path = tmp_path
-
- try:
- if not done:
- pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal, format0)
- print(f"{os.path.basename(inp_path)}->Success")
- except:
- print(f"{os.path.basename(inp_path)}->{traceback.format_exc()}")
- except:
- traceback.print_exc()
- finally:
- try:
- if model_name == "onnx_dereverb_By_FoxJoy":
- del pre_fun.pred.model
- del pre_fun.pred.model_
- else:
- del pre_fun.model
-
- del pre_fun
- return i18n("Finished"), vocal_audio_path, instrumental_audio_path
- except: traceback.print_exc()
-
- if torch.cuda.is_available(): torch.cuda.empty_cache()
-
- elif architecture == "MDX":
- try:
- print(i18n("Starting audio conversion... (This might take a moment)"))
- inp_root, save_root_vocal, save_root_ins = [x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") for x in [inp_root, save_root_vocal, save_root_ins]]
-
- usable_files = [os.path.join(inp_root, file)
- for file in os.listdir(inp_root)
- if file.endswith(tuple(sup_audioext))]
- try:
- if paths is not None:
- paths = [path.name for path in paths]
- else:
- paths = usable_files
-
- except:
- traceback.print_exc()
- paths = usable_files
- print(paths)
- invert=True
- denoise=True
- use_custom_parameter=True
- dim_f=2048
- dim_t=256
- n_fft=7680
- use_custom_compensation=True
- compensation=1.025
- suffix = "vocal_" #@param ["Vocals", "Drums", "Bass", "Other"]{allow-input: true}
- suffix_invert = "instrument_" #@param ["Instrumental", "Drumless", "Bassless", "Instruments"]{allow-input: true}
- print_settings = True # @param{type:"boolean"}
- onnx = id_to_ptm(model_name)
- compensation = compensation if use_custom_compensation or use_custom_parameter else None
- mdx_model = prepare_mdx(onnx,use_custom_parameter, dim_f, dim_t, n_fft, compensation=compensation)
-
-
- for path in paths:
- #inp_path = os.path.join(inp_root, path)
- suffix_naming = suffix if use_custom_parameter else None
- diff_suffix_naming = suffix_invert if use_custom_parameter else None
- run_mdx(onnx, mdx_model, path, format0, diff=invert,suffix=suffix_naming,diff_suffix=diff_suffix_naming,denoise=denoise)
-
- if print_settings:
- print()
- print('[MDX-Net_Colab settings used]')
- print(f'Model used: {onnx}')
- print(f'Model MD5: {mdx.MDX.get_hash(onnx)}')
- print(f'Model parameters:')
- print(f' -dim_f: {mdx_model.dim_f}')
- print(f' -dim_t: {mdx_model.dim_t}')
- print(f' -n_fft: {mdx_model.n_fft}')
- print(f' -compensation: {mdx_model.compensation}')
- print()
- print('[Input file]')
- print('filename(s): ')
- for filename in paths:
- print(f' -{filename}')
- print(f"{os.path.basename(filename)}->Success")
- except:
- traceback.print_exc()
- finally:
- try:
- del mdx_model
- return i18n("Finished"), vocal_audio_path_mdx, instrumental_audio_path_mdx
- except: traceback.print_exc()
-
- print("clean_empty_cache")
-
- if torch.cuda.is_available(): torch.cuda.empty_cache()
-sup_audioext = {'wav', 'mp3', 'flac', 'ogg', 'opus',
- 'm4a', 'mp4', 'aac', 'alac', 'wma',
- 'aiff', 'webm', 'ac3'}
-
-def load_downloaded_audio(url):
- parent_path = find_folder_parent(".", "pretrained_v2")
- try:
- infos = []
- audios_path = os.path.join(parent_path, 'audios')
- zips_path = os.path.join(parent_path, 'zips')
-
- if not os.path.exists(audios_path):
- os.mkdir(audios_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- infos.append(i18n("The file could not be downloaded."))
- yield "\n".join(infos)
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(i18n("Too many users have recently viewed or downloaded this file"))
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- item_path = os.path.join(zips_path, filename)
- if item_path.split('.')[-1] in sup_audioext:
- if os.path.exists(item_path):
- shutil.move(item_path, audios_path)
-
- result = ""
- print(i18n("Audio files have been moved to the 'audios' folder."))
- infos.append(i18n("Audio files have been moved to the 'audios' folder."))
- yield "\n".join(infos)
-
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-
-class error_message(Exception):
- def __init__(self, mensaje):
- self.mensaje = mensaje
- super().__init__(mensaje)
-
-def get_vc(sid, to_return_protect0, to_return_protect1):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
- if hubert_model is not None:
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr
- hubert_model = net_g = n_spk = vc = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return (
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- )
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 0:
- to_return_protect0 = to_return_protect1 = {
- "visible": False,
- "value": 0.5,
- "__type__": "update",
- }
- else:
- to_return_protect0 = {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- }
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return (
- {"visible": True, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1,
- )
-
-def update_model_choices(select_value):
- model_ids = get_model_list()
- model_ids_list = list(model_ids)
- if select_value == "VR":
- return {"choices": uvr5_names, "__type__": "update"}
- elif select_value == "MDX":
- return {"choices": model_ids_list, "__type__": "update"}
-
-def download_model():
- gr.Markdown(value="# " + i18n("Download Model"))
- gr.Markdown(value=i18n("It is used to download your inference models."))
- with gr.Row():
- model_url=gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- download_model_status_bar=gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- download_button=gr.Button(i18n("Download"))
- download_button.click(fn=load_downloaded_model, inputs=[model_url], outputs=[download_model_status_bar])
-
-def download_backup():
- gr.Markdown(value="# " + i18n("Download Backup"))
- gr.Markdown(value=i18n("It is used to download your training backups."))
- with gr.Row():
- model_url=gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- download_model_status_bar=gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- download_button=gr.Button(i18n("Download"))
- download_button.click(fn=load_downloaded_backup, inputs=[model_url], outputs=[download_model_status_bar])
-
-def update_dataset_list(name):
- new_datasets = []
- for foldername in os.listdir("./datasets"):
- if "." not in foldername:
- new_datasets.append(os.path.join(find_folder_parent(".","pretrained"),"datasets",foldername))
- return gr.Dropdown.update(choices=new_datasets)
-
-def download_dataset(trainset_dir4):
- gr.Markdown(value="# " + i18n("Download Dataset"))
- gr.Markdown(value=i18n("Download the dataset with the audios in a compatible format (.wav/.flac) to train your model."))
- with gr.Row():
- dataset_url=gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- load_dataset_status_bar=gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- load_dataset_button=gr.Button(i18n("Download"))
- load_dataset_button.click(fn=load_dowloaded_dataset, inputs=[dataset_url], outputs=[load_dataset_status_bar])
- load_dataset_status_bar.change(update_dataset_list, dataset_url, trainset_dir4)
-
-def download_audio():
- gr.Markdown(value="# " + i18n("Download Audio"))
- gr.Markdown(value=i18n("Download audios of any format for use in inference (recommended for mobile users)."))
- with gr.Row():
- audio_url=gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- download_audio_status_bar=gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- download_button2=gr.Button(i18n("Download"))
- download_button2.click(fn=load_downloaded_audio, inputs=[audio_url], outputs=[download_audio_status_bar])
-
-def youtube_separator():
- gr.Markdown(value="# " + i18n("Separate YouTube tracks"))
- gr.Markdown(value=i18n("Download audio from a YouTube video and automatically separate the vocal and instrumental tracks"))
- with gr.Row():
- input_url = gr.inputs.Textbox(label=i18n("Enter the YouTube link:"))
- output_path = gr.Textbox(
- label=i18n("Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):"),
- value=os.path.abspath(os.getcwd()).replace('\\', '/') + "/yt_downloads",
- visible=False,
- )
- advanced_settings_checkbox = gr.Checkbox(
- value=False,
- label=i18n("Advanced Settings"),
- interactive=True,
- )
- with gr.Row(label = i18n("Advanced Settings"), visible=False, variant='compact') as advanced_settings:
- with gr.Column():
- model_select = gr.Radio(
- label=i18n("Model Architecture:"),
- choices=["VR", "MDX"],
- value="VR",
- interactive=True,
- )
- model_choose = gr.Dropdown(label=i18n("Model: (Be aware that in some models the named vocal will be the instrumental)"),
- choices=uvr5_names,
- value="HP5_only_main_vocal"
- )
- with gr.Row():
- agg = gr.Slider(
- minimum=0,
- maximum=20,
- step=1,
- label=i18n("Vocal Extraction Aggressive"),
- value=10,
- interactive=True,
- )
- with gr.Row():
- opt_vocal_root = gr.Textbox(
- label=i18n("Specify the output folder for vocals:"), value="audios",
- )
- opt_ins_root = gr.Textbox(
- label=i18n("Specify the output folder for accompaniment:"), value="audio-others",
- )
- dir_wav_input = gr.Textbox(
- label=i18n("Enter the path of the audio folder to be processed:"),
- value=((os.getcwd()).replace('\\', '/') + "/yt_downloads"),
- visible=False,
- )
- format0 = gr.Radio(
- label=i18n("Export file format"),
- choices=["wav", "flac", "mp3", "m4a"],
- value="wav",
- visible=False,
- interactive=True,
- )
- wav_inputs = gr.File(
- file_count="multiple", label=i18n("You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder."),
- visible=False,
- )
- model_select.change(
- fn=update_model_choices,
- inputs=model_select,
- outputs=model_choose,
- )
- with gr.Row():
- vc_output4 = gr.Textbox(label=i18n("Status:"))
- vc_output5 = gr.Audio(label=i18n("Vocal"), type='filepath')
- vc_output6 = gr.Audio(label=i18n("Instrumental"), type='filepath')
- with gr.Row():
- but2 = gr.Button(i18n("Download and Separate"))
- but2.click(
- uvr,
- [
- input_url,
- output_path,
- model_choose,
- dir_wav_input,
- opt_vocal_root,
- wav_inputs,
- opt_ins_root,
- agg,
- format0,
- model_select
- ],
- [vc_output4, vc_output5, vc_output6],
- )
- def toggle_advanced_settings(checkbox):
- return {"visible": checkbox, "__type__": "update"}
-
- advanced_settings_checkbox.change(
- fn=toggle_advanced_settings,
- inputs=[advanced_settings_checkbox],
- outputs=[advanced_settings]
- )
-
-
-def get_bark_voice():
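- # Return the built-in Bark speaker presets formatted as "voice_id-Gender" strings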
- mensaje = """
-v2/en_speaker_0 English Male
-v2/en_speaker_1 English Male
-v2/en_speaker_2 English Male
-v2/en_speaker_3 English Male
-v2/en_speaker_4 English Male
-v2/en_speaker_5 English Male
-v2/en_speaker_6 English Male
-v2/en_speaker_7 English Male
-v2/en_speaker_8 English Male
-v2/en_speaker_9 English Female
-v2/zh_speaker_0 Chinese (Simplified) Male
-v2/zh_speaker_1 Chinese (Simplified) Male
-v2/zh_speaker_2 Chinese (Simplified) Male
-v2/zh_speaker_3 Chinese (Simplified) Male
-v2/zh_speaker_4 Chinese (Simplified) Female
-v2/zh_speaker_5 Chinese (Simplified) Male
-v2/zh_speaker_6 Chinese (Simplified) Female
-v2/zh_speaker_7 Chinese (Simplified) Female
-v2/zh_speaker_8 Chinese (Simplified) Male
-v2/zh_speaker_9 Chinese (Simplified) Female
-v2/fr_speaker_0 French Male
-v2/fr_speaker_1 French Female
-v2/fr_speaker_2 French Female
-v2/fr_speaker_3 French Male
-v2/fr_speaker_4 French Male
-v2/fr_speaker_5 French Female
-v2/fr_speaker_6 French Male
-v2/fr_speaker_7 French Male
-v2/fr_speaker_8 French Male
-v2/fr_speaker_9 French Male
-v2/de_speaker_0 German Male
-v2/de_speaker_1 German Male
-v2/de_speaker_2 German Male
-v2/de_speaker_3 German Female
-v2/de_speaker_4 German Male
-v2/de_speaker_5 German Male
-v2/de_speaker_6 German Male
-v2/de_speaker_7 German Male
-v2/de_speaker_8 German Female
-v2/de_speaker_9 German Male
-v2/hi_speaker_0 Hindi Female
-v2/hi_speaker_1 Hindi Female
-v2/hi_speaker_2 Hindi Male
-v2/hi_speaker_3 Hindi Female
-v2/hi_speaker_4 Hindi Female
-v2/hi_speaker_5 Hindi Male
-v2/hi_speaker_6 Hindi Male
-v2/hi_speaker_7 Hindi Male
-v2/hi_speaker_8 Hindi Male
-v2/hi_speaker_9 Hindi Female
-v2/it_speaker_0 Italian Male
-v2/it_speaker_1 Italian Male
-v2/it_speaker_2 Italian Female
-v2/it_speaker_3 Italian Male
-v2/it_speaker_4 Italian Male
-v2/it_speaker_5 Italian Male
-v2/it_speaker_6 Italian Male
-v2/it_speaker_7 Italian Female
-v2/it_speaker_8 Italian Male
-v2/it_speaker_9 Italian Female
-v2/ja_speaker_0 Japanese Female
-v2/ja_speaker_1 Japanese Female
-v2/ja_speaker_2 Japanese Male
-v2/ja_speaker_3 Japanese Female
-v2/ja_speaker_4 Japanese Female
-v2/ja_speaker_5 Japanese Female
-v2/ja_speaker_6 Japanese Male
-v2/ja_speaker_7 Japanese Female
-v2/ja_speaker_8 Japanese Female
-v2/ja_speaker_9 Japanese Female
-v2/ko_speaker_0 Korean Female
-v2/ko_speaker_1 Korean Male
-v2/ko_speaker_2 Korean Male
-v2/ko_speaker_3 Korean Male
-v2/ko_speaker_4 Korean Male
-v2/ko_speaker_5 Korean Male
-v2/ko_speaker_6 Korean Male
-v2/ko_speaker_7 Korean Male
-v2/ko_speaker_8 Korean Male
-v2/ko_speaker_9 Korean Male
-v2/pl_speaker_0 Polish Male
-v2/pl_speaker_1 Polish Male
-v2/pl_speaker_2 Polish Male
-v2/pl_speaker_3 Polish Male
-v2/pl_speaker_4 Polish Female
-v2/pl_speaker_5 Polish Male
-v2/pl_speaker_6 Polish Female
-v2/pl_speaker_7 Polish Male
-v2/pl_speaker_8 Polish Male
-v2/pl_speaker_9 Polish Female
-v2/pt_speaker_0 Portuguese Male
-v2/pt_speaker_1 Portuguese Male
-v2/pt_speaker_2 Portuguese Male
-v2/pt_speaker_3 Portuguese Male
-v2/pt_speaker_4 Portuguese Male
-v2/pt_speaker_5 Portuguese Male
-v2/pt_speaker_6 Portuguese Male
-v2/pt_speaker_7 Portuguese Male
-v2/pt_speaker_8 Portuguese Male
-v2/pt_speaker_9 Portuguese Male
-v2/ru_speaker_0 Russian Male
-v2/ru_speaker_1 Russian Male
-v2/ru_speaker_2 Russian Male
-v2/ru_speaker_3 Russian Male
-v2/ru_speaker_4 Russian Male
-v2/ru_speaker_5 Russian Female
-v2/ru_speaker_6 Russian Female
-v2/ru_speaker_7 Russian Male
-v2/ru_speaker_8 Russian Male
-v2/ru_speaker_9 Russian Female
-v2/es_speaker_0 Spanish Male
-v2/es_speaker_1 Spanish Male
-v2/es_speaker_2 Spanish Male
-v2/es_speaker_3 Spanish Male
-v2/es_speaker_4 Spanish Male
-v2/es_speaker_5 Spanish Male
-v2/es_speaker_6 Spanish Male
-v2/es_speaker_7 Spanish Male
-v2/es_speaker_8 Spanish Female
-v2/es_speaker_9 Spanish Female
-v2/tr_speaker_0 Turkish Male
-v2/tr_speaker_1 Turkish Male
-v2/tr_speaker_2 Turkish Male
-v2/tr_speaker_3 Turkish Male
-v2/tr_speaker_4 Turkish Female
-v2/tr_speaker_5 Turkish Female
-v2/tr_speaker_6 Turkish Male
-v2/tr_speaker_7 Turkish Male
-v2/tr_speaker_8 Turkish Male
-v2/tr_speaker_9 Turkish Male
- """
- # Split the message into lines
- lineas = mensaje.split("\n")
- datos_deseados = []
- for linea in lineas:
- partes = linea.split("\t")
- if len(partes) == 3:
- clave, _, genero = partes
- datos_deseados.append(f"{clave}-{genero}")
-
- return datos_deseados
-
-
-def get_edge_voice():
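- # Parse the output of `edge-tts -l` and format each available voice as "Name-Gender"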
- completed_process = subprocess.run(['edge-tts',"-l"], capture_output=True, text=True)
- lines = completed_process.stdout.strip().split("\n")
- data = []
- current_entry = {}
- for line in lines:
- if line.startswith("Name: "):
- if current_entry:
- data.append(current_entry)
- current_entry = {"Name": line.split(": ")[1]}
- elif line.startswith("Gender: "):
- current_entry["Gender"] = line.split(": ")[1]
- if current_entry:
- data.append(current_entry)
- tts_voice = []
- for entry in data:
- name = entry["Name"]
- gender = entry["Gender"]
- formatted_entry = f'{name}-{gender}'
- tts_voice.append(formatted_entry)
- return tts_voice
-
-
-#print(set_tts_voice)
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/modules/vc/modules.py b/spaces/Eddycrack864/Applio-Inference/infer/modules/vc/modules.py
deleted file mode 100644
index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/modules/vc/modules.py
+++ /dev/null
@@ -1,526 +0,0 @@
-import os, sys
-import traceback
-import logging
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-logger = logging.getLogger(__name__)
-import lib.globals.globals as rvc_globals
-import numpy as np
-import soundfile as sf
-import torch
-from io import BytesIO
-from infer.lib.audio import load_audio
-from infer.lib.audio import wav2
-from infer.lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from infer.modules.vc.pipeline import Pipeline
-from infer.modules.vc.utils import *
-import time
-import scipy.io.wavfile as wavfile
-
-def note_to_hz(note_name):
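- # Convert a note name such as "A4" to its frequency in Hz (equal temperament, A4 = 440 Hz)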
- SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2}
- pitch_class, octave = note_name[:-1], int(note_name[-1])
- semitone = SEMITONES[pitch_class]
- note_number = 12 * (octave - 4) + semitone
- frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number
- return frequency
-
-class VC:
- def __init__(self, config):
- self.n_spk = None
- self.tgt_sr = None
- self.net_g = None
- self.pipeline = None
- self.cpt = None
- self.version = None
- self.if_f0 = None
- self.hubert_model = None
-
- self.config = config
-
- def get_vc(self, sid, *to_return_protect):
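- # Load the checkpoint selected by sid, rebuild the synthesizer and pipeline, and return Gradio UI updates;
- # an empty sid unloads the current model and clears the CUDA cache instead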
- logger.info("Get sid: " + sid)
-
- to_return_protect0 = {
- "visible": self.if_f0 != 0,
- "value": to_return_protect[0]
- if self.if_f0 != 0 and to_return_protect
- else 0.5,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": self.if_f0 != 0,
- "value": to_return_protect[1]
- if self.if_f0 != 0 and to_return_protect
- else 0.33,
- "__type__": "update",
- }
-
- if not sid:
- if self.hubert_model is not None:  # polling may call this repeatedly, so check whether sid switched from a loaded model to no model
- logger.info("Clean model cache")
- del (
- self.net_g,
- self.n_spk,
- self.vc,
- self.hubert_model,
- self.tgt_sr,
- ) # ,cpt
- self.hubert_model = self.net_g = self.n_spk = self.vc = self.tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- ### without going through all of this, the cleanup below is not thorough
- self.if_f0 = self.cpt.get("f0", 1)
- self.version = self.cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *self.cpt["config"], is_half=self.config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *self.cpt["config"], is_half=self.config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"])
- del self.net_g, self.cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return (
- {"visible": False, "__type__": "update"},
- {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- },
- {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- },
- "",
- "",
- )
- #person = f'{os.getenv("weight_root")}/{sid}'
- person = f'{sid}'
- #logger.info(f"Loading: {person}")
- logger.info(f"Loading...")
- self.cpt = torch.load(person, map_location="cpu")
- self.tgt_sr = self.cpt["config"][-1]
- self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = self.cpt.get("f0", 1)
- self.version = self.cpt.get("version", "v1")
-
- synthesizer_class = {
- ("v1", 1): SynthesizerTrnMs256NSFsid,
- ("v1", 0): SynthesizerTrnMs256NSFsid_nono,
- ("v2", 1): SynthesizerTrnMs768NSFsid,
- ("v2", 0): SynthesizerTrnMs768NSFsid_nono,
- }
-
- self.net_g = synthesizer_class.get(
- (self.version, self.if_f0), SynthesizerTrnMs256NSFsid
- )(*self.cpt["config"], is_half=self.config.is_half)
-
- del self.net_g.enc_q
-
- self.net_g.load_state_dict(self.cpt["weight"], strict=False)
- self.net_g.eval().to(self.config.device)
- if self.config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
-
- self.pipeline = Pipeline(self.tgt_sr, self.config)
- n_spk = self.cpt["config"][-3]
- index = {"value": get_index_path_from_model(sid), "__type__": "update"}
- logger.info("Select index: " + index["value"])
-
- return (
- (
- {"visible": False, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1
- )
- if to_return_protect
- else {"visible": False, "maximum": n_spk, "__type__": "update"}
- )
-
-
- def vc_single(
- self,
- sid,
- input_audio_path0,
- input_audio_path1,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
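- # Convert a single audio file: load it, extract HuBERT features, run the RVC pipeline, and save the result under audio-outputs/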
- global total_time
- total_time = 0
- start_time = time.time()
- if not input_audio_path0 and not input_audio_path1:
- return "You need to upload an audio", None
-
- if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))):
- return "Audio was not properly selected or doesn't exist", None
-
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'")
- print("-------------------")
- f0_up_key = int(f0_up_key)
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"Attempting to load {input_audio_path1}....")
- audio = load_audio(file=input_audio_path1,
- sr=16000,
- DoFormant=rvc_globals.DoFormant,
- Quefrency=rvc_globals.Quefrency,
- Timbre=rvc_globals.Timbre)
-
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
-
- if self.hubert_model is None:
- self.hubert_model = load_hubert(self.config)
-
- try:
- self.if_f0 = self.cpt.get("f0", 1)
- except NameError:
- message = "Model was not properly selected"
- print(message)
- return message, None
-
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
- )  # guard against a common user mistake: automatically swap the "trained" index for the "added" one
-
- try:
- audio_opt = self.pipeline.pipeline(
- self.hubert_model,
- self.net_g,
- sid,
- audio,
- input_audio_path1,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- self.if_f0,
- filter_radius,
- self.tgt_sr,
- resample_sr,
- rms_mix_rate,
- self.version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=f0_file,
- f0_min=f0_min,
- f0_max=f0_max
- )
- except AssertionError:
- message = "Mismatching index version detected (v1 with v2, or v2 with v1)."
- print(message)
- return message, None
- except NameError:
- message = "RVC libraries are still loading. Please try again in a few seconds."
- print(message)
- return message, None
-
- if resample_sr >= 16000 and self.tgt_sr != resample_sr:
- self.tgt_sr = resample_sr
- index_info = (
- "Index:\n%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- end_time = time.time()
- total_time = end_time - start_time
-
- output_folder = "audio-outputs"
- os.makedirs(output_folder, exist_ok=True)
- output_filename = "generated_audio_{}.wav"
- output_count = 1
- while True:
- current_output_path = os.path.join(output_folder, output_filename.format(output_count))
- if not os.path.exists(current_output_path):
- break
- output_count += 1
-
- wavfile.write(current_output_path, self.tgt_sr, audio_opt)
- print(f"Generated audio saved to: {current_output_path}")
- return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- logger.warn(info)
- return info, (None, None)
-
- def vc_single_dont_save(
- self,
- sid,
- input_audio_path0,
- input_audio_path1,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
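- # Same as vc_single, but returns the converted audio directly instead of writing it to disk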
- global total_time
- total_time = 0
- start_time = time.time()
- if not input_audio_path0 and not input_audio_path1:
- return "You need to upload an audio", None
-
- if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))):
- return "Audio was not properly selected or doesn't exist", None
-
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'")
- print("-------------------")
- f0_up_key = int(f0_up_key)
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"Attempting to load {input_audio_path1}....")
- audio = load_audio(file=input_audio_path1,
- sr=16000,
- DoFormant=rvc_globals.DoFormant,
- Quefrency=rvc_globals.Quefrency,
- Timbre=rvc_globals.Timbre)
-
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
-
- if self.hubert_model is None:
- self.hubert_model = load_hubert(self.config)
-
- try:
- self.if_f0 = self.cpt.get("f0", 1)
- except NameError:
- message = "Model was not properly selected"
- print(message)
- return message, None
-
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
- )  # guard against a common user mistake: automatically swap the "trained" index for the "added" one
-
- try:
- audio_opt = self.pipeline.pipeline(
- self.hubert_model,
- self.net_g,
- sid,
- audio,
- input_audio_path1,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- self.if_f0,
- filter_radius,
- self.tgt_sr,
- resample_sr,
- rms_mix_rate,
- self.version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=f0_file,
- f0_min=f0_min,
- f0_max=f0_max
- )
- except AssertionError:
- message = "Mismatching index version detected (v1 with v2, or v2 with v1)."
- print(message)
- return message, None
- except NameError:
- message = "RVC libraries are still loading. Please try again in a few seconds."
- print(message)
- return message, None
-
- if resample_sr >= 16000 and self.tgt_sr != resample_sr:
- self.tgt_sr = resample_sr
- index_info = (
- "Index:\n%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- end_time = time.time()
- total_time = end_time - start_time
-
- return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- logger.warn(info)
- return info, (None, None)
-
-
- def vc_multi(
- self,
- sid,
- dir_path,
- opt_root,
- paths,
- f0_up_key,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- format1,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
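- # Batch conversion: run vc_single on every file in dir_path (or on the uploaded files) and write the results to opt_root in the requested format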
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- dir_path = (
- dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )  # strip stray spaces, quotes, and newlines that users often copy along with the path
- opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- os.makedirs(opt_root, exist_ok=True)
- try:
- if dir_path != "":
- paths = [
- os.path.join(dir_path, name) for name in os.listdir(dir_path)
- ]
- else:
- paths = [path.name for path in paths]
- except:
- traceback.print_exc()
- paths = [path.name for path in paths]
- infos = []
- for path in paths:
- info, opt = self.vc_single(
- sid,
- path,
- path,
- f0_up_key,
- None,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- )
- if "Success" in info:
- try:
- tgt_sr, audio_opt = opt
- if format1 in ["wav", "flac"]:
- sf.write(
- "%s/%s.%s"
- % (opt_root, os.path.basename(path), format1),
- audio_opt,
- tgt_sr,
- )
- else:
- path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1)
- with BytesIO() as wavf:
- sf.write(
- wavf,
- audio_opt,
- tgt_sr,
- format="wav"
- )
- wavf.seek(0, 0)
- with open(path, "wb") as outf:
- wav2(wavf, outf, format1)
- except:
- info += traceback.format_exc()
- infos.append("%s->%s" % (os.path.basename(path), info))
- yield "\n".join(infos)
- yield "\n".join(infos)
- except:
- yield traceback.format_exc()
diff --git a/spaces/EronSamez/RVC_HFmeu/extract_locale.py b/spaces/EronSamez/RVC_HFmeu/extract_locale.py
deleted file mode 100644
index a4ff5ea3ddd7c612c640544099ab98a861b8fe35..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/extract_locale.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import json
-import re
-
-# Define regular expression patterns
-pattern = r"""i18n\([\s\n\t]*(["'][^"']+["'])[\s\n\t]*\)"""
-
-# Initialize the dictionary to store key-value pairs
-data = {}
-
-
-def process(fn: str):
- global data
- with open(fn, "r", encoding="utf-8") as f:
- contents = f.read()
- matches = re.findall(pattern, contents)
- for key in matches:
- key = eval(key)
- print("extract:", key)
- data[key] = key
-
-
-print("processing infer-web.py")
-process("infer-web.py")
-
-print("processing gui_v0.py")
-process("gui_v0.py")
-
-print("processing gui_v1.py")
-process("gui_v1.py")
-
-# Save as a JSON file
-with open("./i18n/en_US.json", "w", encoding="utf-8") as f:
- json.dump(data, f, ensure_ascii=False, indent=4)
- f.write("\n")
diff --git a/spaces/EuroPython2022/BayesCap/losses.py b/spaces/EuroPython2022/BayesCap/losses.py
deleted file mode 100644
index 990af85be1163124a385b06ac5ffc63a47b0cfdd..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/BayesCap/losses.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision.models as models
-from torch import Tensor
-
-class ContentLoss(nn.Module):
- """Constructs a content loss function based on the VGG19 network.
- Feature maps taken from the later, high-level layers focus the loss on the texture content of the image.
-
- Paper reference list:
- -`Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network` paper.
- -`ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks` paper.
- -`Perceptual Extreme Super Resolution Network with Receptive Field Block` paper.
-
- """
-
- def __init__(self) -> None:
- super(ContentLoss, self).__init__()
- # Load the VGG19 model trained on the ImageNet dataset.
- vgg19 = models.vgg19(pretrained=True).eval()
- # Extract the thirty-sixth layer output in the VGG19 model as the content loss.
- self.feature_extractor = nn.Sequential(*list(vgg19.features.children())[:36])
- # Freeze model parameters.
- for parameters in self.feature_extractor.parameters():
- parameters.requires_grad = False
-
- # The preprocessing method of the input data. This is the VGG model preprocessing method of the ImageNet dataset.
- self.register_buffer("mean", torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
- self.register_buffer("std", torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
-
- def forward(self, sr: Tensor, hr: Tensor) -> Tensor:
- # Standardized operations
- sr = sr.sub(self.mean).div(self.std)
- hr = hr.sub(self.mean).div(self.std)
-
- # Find the feature map difference between the two images
- loss = F.l1_loss(self.feature_extractor(sr), self.feature_extractor(hr))
-
- return loss
-
-
-class GenGaussLoss(nn.Module):
- def __init__(
- self, reduction='mean',
- alpha_eps = 1e-4, beta_eps=1e-4,
- resi_min = 1e-4, resi_max=1e3
- ) -> None:
- super(GenGaussLoss, self).__init__()
- self.reduction = reduction
- self.alpha_eps = alpha_eps
- self.beta_eps = beta_eps
- self.resi_min = resi_min
- self.resi_max = resi_max
-
- def forward(
- self,
- mean: Tensor, one_over_alpha: Tensor, beta: Tensor, target: Tensor
- ):
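- # Per-pixel negative log-likelihood of a generalized Gaussian with the given mean, scale 1/alpha and shape beta
- # (the epsilon terms keep both parameters strictly positive)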
- one_over_alpha1 = one_over_alpha + self.alpha_eps
- beta1 = beta + self.beta_eps
-
- resi = torch.abs(mean - target)
- # resi = torch.pow(resi*one_over_alpha1, beta1).clamp(min=self.resi_min, max=self.resi_max)
- resi = (resi*one_over_alpha1*beta1).clamp(min=self.resi_min, max=self.resi_max)
- ## check if resi has nans
- if torch.sum(resi != resi) > 0:
- print('resi has nans!!')
- return None
-
- log_one_over_alpha = torch.log(one_over_alpha1)
- log_beta = torch.log(beta1)
- lgamma_beta = torch.lgamma(torch.pow(beta1, -1))
-
- if torch.sum(log_one_over_alpha != log_one_over_alpha) > 0:
- print('log_one_over_alpha has nan')
- if torch.sum(lgamma_beta != lgamma_beta) > 0:
- print('lgamma_beta has nan')
- if torch.sum(log_beta != log_beta) > 0:
- print('log_beta has nan')
-
- l = resi - log_one_over_alpha + lgamma_beta - log_beta
-
- if self.reduction == 'mean':
- return l.mean()
- elif self.reduction == 'sum':
- return l.sum()
- else:
- print('Reduction not supported')
- return None
-
-class TempCombLoss(nn.Module):
- def __init__(
- self, reduction='mean',
- alpha_eps = 1e-4, beta_eps=1e-4,
- resi_min = 1e-4, resi_max=1e3
- ) -> None:
- super(TempCombLoss, self).__init__()
- self.reduction = reduction
- self.alpha_eps = alpha_eps
- self.beta_eps = beta_eps
- self.resi_min = resi_min
- self.resi_max = resi_max
-
- self.L_GenGauss = GenGaussLoss(
- reduction=self.reduction,
- alpha_eps=self.alpha_eps, beta_eps=self.beta_eps,
- resi_min=self.resi_min, resi_max=self.resi_max
- )
- self.L_l1 = nn.L1Loss(reduction=self.reduction)
-
- def forward(
- self,
- mean: Tensor, one_over_alpha: Tensor, beta: Tensor, target: Tensor,
- T1: float, T2: float
- ):
- l1 = self.L_l1(mean, target)
- l2 = self.L_GenGauss(mean, one_over_alpha, beta, target)
- l = T1*l1 + T2*l2
-
- return l
-
-
-# x1 = torch.randn(4,3,32,32)
-# x2 = torch.rand(4,3,32,32)
-# x3 = torch.rand(4,3,32,32)
-# x4 = torch.randn(4,3,32,32)
-
-# L = GenGaussLoss(alpha_eps=1e-4, beta_eps=1e-4, resi_min=1e-4, resi_max=1e3)
-# L2 = TempCombLoss(alpha_eps=1e-4, beta_eps=1e-4, resi_min=1e-4, resi_max=1e3)
-# print(L(x1, x2, x3, x4), L2(x1, x2, x3, x4, 1e0, 1e-2))
\ No newline at end of file
diff --git a/spaces/Falah/stablediffusionDB/app.py b/spaces/Falah/stablediffusionDB/app.py
deleted file mode 100644
index faa76da3aa856ae351d5c41a3ebae5a22134884c..0000000000000000000000000000000000000000
--- a/spaces/Falah/stablediffusionDB/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import numpy as np
-import gradio as gr
-from datasets import load_dataset
-
-def generate_random_data():
- # Load the dataset with the `large_random_1k` subset
- dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
- # All data are stored in the `train` split
- my_1k_data = dataset['train']
-
- random_i = np.random.choice(range(my_1k_data.num_rows))
-
- prompt = my_1k_data['prompt'][random_i]
- image = my_1k_data['image'][random_i]
- seed = my_1k_data['seed'][random_i]
- step = my_1k_data['step'][random_i]
- cfg = my_1k_data['cfg'][random_i]
- sampler = my_1k_data['sampler'][random_i]
-
- return prompt, image, seed, step, cfg, sampler
-
-def random_data():
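- # Draw one random DiffusionDB sample, dump its metadata to random_data.txt, and return it for display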
- prompt, image, seed, step, cfg, sampler = generate_random_data()
-
- data = {
- 'Prompt': prompt,
- 'Seed': seed,
- 'Step': step,
- 'CFG': cfg,
- 'Sampler': sampler
- }
-
- with open("random_data.txt", "w") as file:
- for key, value in data.items():
- file.write(f"{key}: {value}\n")
-
- return prompt, image, seed, step, cfg, sampler
-
-iface = gr.Interface(fn=random_data, inputs=None, outputs=[
- gr.outputs.Textbox(label="Prompt"),
- gr.outputs.Image(label="Image", type="pil"),
- gr.outputs.Textbox(label="Seed"),
- gr.outputs.Textbox(label="Step"),
- gr.outputs.Textbox(label="CFG"),
- gr.outputs.Textbox(label="Sampler")
-], title="Stable Diffusion DB", description="By Falah.G.S AI Developer")
-
-iface.launch(debug=True)
diff --git a/spaces/Flux9665/IMS-Toucan/InferenceInterfaces/__init__.py b/spaces/Flux9665/IMS-Toucan/InferenceInterfaces/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/Img_to_H5.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/Img_to_H5.py
deleted file mode 100644
index fde0a149ddd3caad82226f4ed24a37916008b8db..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/Img_to_H5.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import glob
-
-import h5py
-from PIL import Image
-from torchvision.transforms import RandomCrop
-from torchvision.transforms.functional import to_tensor
-from tqdm import tqdm
-
-from Dataloader import ImageAugment
-
-patch_size = 128
-shrink_size = 2
-noise_level = 1
-patches_per_img = 20
-images = glob.glob("dataset/train/*")
-
-database = h5py.File("train_images.hdf5", "w")
-
-dat_group = database.create_group("shrink_2_noise_level_1_downsample_random_rgb")
-# del database['shrink_2_noise_level_1_downsample_random']
-storage_lr = dat_group.create_dataset(
- "train_lr",
- shape=(
- patches_per_img * len(images),
- 3,
- patch_size // shrink_size,
- patch_size // shrink_size,
- ),
- dtype="float32",
- # compression='lzf',
-)
-storage_hr = dat_group.create_dataset(
- "train_hr",
- shape=(patches_per_img * len(images), 3, patch_size, patch_size),
- # compression='lzf',
- dtype="float32",
-)
-
-random_cropper = RandomCrop(size=patch_size)
-img_augmenter = ImageAugment(shrink_size, noise_level, down_sample_method=None)
-
-
-def get_img_patches(img_pil):
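- # Crop a random patch from the image and produce the (low-res, high-res) training pair via ImageAugment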
- img_patch = random_cropper(img_pil)
- lr_hr_patches = img_augmenter.process(img_patch)
- return lr_hr_patches
-
-
-counter = 0
-for img in tqdm(images):
- img_pil = Image.open(img).convert("RGB")
- for i in range(patches_per_img):
- patch = get_img_patches(img_pil)
- storage_lr[counter] = to_tensor(patch[0].convert("RGB")).numpy()
- storage_hr[counter] = to_tensor(patch[1].convert("RGB")).numpy()
- counter += 1
-database.close()
diff --git a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/bias_act.h b/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/bias_act.h
deleted file mode 100644
index a32187e1fb7e3bae509d4eceaf900866866875a4..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/bias_act.h
+++ /dev/null
@@ -1,38 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct bias_act_kernel_params
-{
- const void* x; // [sizeX]
- const void* b; // [sizeB] or NULL
- const void* xref; // [sizeX] or NULL
- const void* yref; // [sizeX] or NULL
- const void* dy; // [sizeX] or NULL
- void* y; // [sizeX]
-
- int grad;
- int act;
- float alpha;
- float gain;
- float clamp;
-
- int sizeX;
- int sizeB;
- int stepB;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template void* choose_bias_act_kernel(const bias_act_kernel_params& p);
-
-//------------------------------------------------------------------------
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/anchor/point_generator.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/anchor/point_generator.py
deleted file mode 100644
index e6fbd988c317992c092c68c827dc4c53223b4a4a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/anchor/point_generator.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import torch
-
-from .builder import ANCHOR_GENERATORS
-
-
-@ANCHOR_GENERATORS.register_module()
-class PointGenerator(object):
-
- def _meshgrid(self, x, y, row_major=True):
- xx = x.repeat(len(y))
- yy = y.view(-1, 1).repeat(1, len(x)).view(-1)
- if row_major:
- return xx, yy
- else:
- return yy, xx
-
- def grid_points(self, featmap_size, stride=16, device='cuda'):
- feat_h, feat_w = featmap_size
- shift_x = torch.arange(0., feat_w, device=device) * stride
- shift_y = torch.arange(0., feat_h, device=device) * stride
- shift_xx, shift_yy = self._meshgrid(shift_x, shift_y)
- stride = shift_x.new_full((shift_xx.shape[0], ), stride)
- shifts = torch.stack([shift_xx, shift_yy, stride], dim=-1)
- all_points = shifts.to(device)
- return all_points
-
- def valid_flags(self, featmap_size, valid_size, device='cuda'):
- feat_h, feat_w = featmap_size
- valid_h, valid_w = valid_size
- assert valid_h <= feat_h and valid_w <= feat_w
- valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device)
- valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device)
- valid_x[:valid_w] = 1
- valid_y[:valid_h] = 1
- valid_xx, valid_yy = self._meshgrid(valid_x, valid_y)
- valid = valid_xx & valid_yy
- return valid
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py
deleted file mode 100644
index 2c73b3839c8c1bc859eb3b8864256a00cfd022fe..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/ocrnet_hr18.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/CONTRIBUTING.md b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/CONTRIBUTING.md
deleted file mode 100644
index 55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/CONTRIBUTING.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Contributing to Audiocraft
-
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-
-Audiocraft is the implementation of a research paper.
-Therefore, we do not plan on accepting many pull requests for new features.
-We certainly welcome them for bug fixes.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Meta's open source projects.
-
-Complete your CLA here:
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to encodec, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/visualize/render_mesh.py b/spaces/Grezz/generate_human_motion/VQ-Trans/visualize/render_mesh.py
deleted file mode 100644
index d44d04f551ccb4f1ffc9efb4cb1a44c407ede836..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/visualize/render_mesh.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import argparse
-import os
-from visualize import vis_utils
-import shutil
-from tqdm import tqdm
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--input_path", type=str, required=True, help='stick figure mp4 file to be rendered.')
- parser.add_argument("--cuda", type=bool, default=True, help='')
- parser.add_argument("--device", type=int, default=0, help='')
- params = parser.parse_args()
-
- assert params.input_path.endswith('.mp4')
- parsed_name = os.path.basename(params.input_path).replace('.mp4', '').replace('sample', '').replace('rep', '')
- sample_i, rep_i = [int(e) for e in parsed_name.split('_')]
- npy_path = os.path.join(os.path.dirname(params.input_path), 'results.npy')
- out_npy_path = params.input_path.replace('.mp4', '_smpl_params.npy')
- assert os.path.exists(npy_path)
- results_dir = params.input_path.replace('.mp4', '_obj')
- if os.path.exists(results_dir):
- shutil.rmtree(results_dir)
- os.makedirs(results_dir)
-
- npy2obj = vis_utils.npy2obj(npy_path, sample_i, rep_i,
- device=params.device, cuda=params.cuda)
-
- print('Saving obj files to [{}]'.format(os.path.abspath(results_dir)))
- for frame_i in tqdm(range(npy2obj.real_num_frames)):
- npy2obj.save_obj(os.path.join(results_dir, 'frame{:03d}.obj'.format(frame_i)), frame_i)
-
- print('Saving SMPL params to [{}]'.format(os.path.abspath(out_npy_path)))
- npy2obj.save_npy(out_npy_path)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lstm_cell_with_zoneout.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lstm_cell_with_zoneout.py
deleted file mode 100644
index f04e5db255c62bbe0faebbc641f579f92be5580c..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lstm_cell_with_zoneout.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch.nn as nn
-
-
-class LSTMCellWithZoneOut(nn.Module):
- """
- Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
- https://arxiv.org/abs/1606.01305
- """
-
- def __init__(self, prob: float, input_size: int, hidden_size: int,
- bias: bool = True):
- super(LSTMCellWithZoneOut, self).__init__()
- self.lstm_cell = nn.LSTMCell(input_size, hidden_size, bias=bias)
- self.prob = prob
- if prob > 1.0 or prob < 0.0:
- raise ValueError("zoneout probability must be in the range from "
- "0.0 to 1.0.")
-
- def zoneout(self, h, next_h, prob):
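- # During training, each hidden unit keeps its previous value with probability prob (Bernoulli mask);
- # at evaluation time the previous and next states are blended deterministically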
- if isinstance(h, tuple):
- return tuple(
- [self.zoneout(h[i], next_h[i], prob) for i in range(len(h))]
- )
-
- if self.training:
- mask = h.new_zeros(*h.size()).bernoulli_(prob)
- return mask * h + (1 - mask) * next_h
-
- return prob * h + (1 - prob) * next_h
-
- def forward(self, x, h):
- return self.zoneout(h, self.lstm_cell(x, h), self.prob)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py
deleted file mode 100644
index d95da59c2471bfa858fd627605196d7f41f9ec12..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.modules import TransformerSentenceEncoderLayer
-from fairseq.modules.sparse_multihead_attention import SparseMultiheadAttention
-
-
-class SparseTransformerSentenceEncoderLayer(TransformerSentenceEncoderLayer):
- """
- Implements a Sparse Transformer Encoder Layer (see SparseMultiheadAttention)
- """
-
- def __init__(
- self,
- embedding_dim: int = 768,
- ffn_embedding_dim: int = 3072,
- num_attention_heads: int = 8,
- dropout: float = 0.1,
- attention_dropout: float = 0.1,
- activation_dropout: float = 0.1,
- activation_fn: str = "relu",
- export: bool = False,
- is_bidirectional: bool = True,
- stride: int = 32,
- expressivity: int = 8,
- ) -> None:
-
- super().__init__(
- embedding_dim,
- ffn_embedding_dim,
- num_attention_heads,
- dropout,
- attention_dropout,
- activation_dropout,
- activation_fn,
- export,
- )
-
- self.self_attn = SparseMultiheadAttention(
- self.embedding_dim,
- num_attention_heads,
- dropout=attention_dropout,
- add_bias_kv=False,
- add_zero_attn=False,
- self_attention=True,
- is_bidirectional=is_bidirectional,
- stride=stride,
- expressivity=expressivity,
- )
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/spm_decode.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/spm_decode.py
deleted file mode 100644
index 1c18b1d2a7d7628b7aeb6fdb6c4ab5a096e9edf8..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/spm_decode.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import argparse
-
-import sentencepiece as spm
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--model", required=True, help="sentencepiece model to use for decoding"
- )
- parser.add_argument("--input", required=True, help="input file to decode")
- parser.add_argument("--input_format", choices=["piece", "id"], default="piece")
- args = parser.parse_args()
-
- sp = spm.SentencePieceProcessor()
- sp.Load(args.model)
-
- if args.input_format == "piece":
-
- def decode(l):
- return "".join(sp.DecodePieces(l))
-
- elif args.input_format == "id":
-
- def decode(l):
- return "".join(sp.DecodeIds(l))
-
- else:
- raise NotImplementedError
-
- def tok2int(tok):
- # remap reference-side (represented as <>) to 0
- return int(tok) if tok != "<>" else 0
-
- with open(args.input, "r", encoding="utf-8") as h:
- for line in h:
- if args.input_format == "id":
- print(decode(list(map(tok2int, line.rstrip().split()))))
- elif args.input_format == "piece":
- print(decode(line.rstrip().split()))
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/hifi/env.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/hifi/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/hifi/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-def build_env(config, config_name, path):
- t_path = os.path.join(path, config_name)
- if config != t_path:
- os.makedirs(path, exist_ok=True)
- shutil.copyfile(config, os.path.join(path, config_name))
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/models.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/models.py
deleted file mode 100644
index a77596153fa2e7e6fdd52ee0028a0c8ce02050b4..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/models.py
+++ /dev/null
@@ -1,403 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import modules
-import commons
-import attentions
-import monotonic_align
-
-
-class DurationPredictor(nn.Module):
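- # Predicts log-durations from encoder states: two masked Conv1d + LayerNorm + dropout blocks followed by a single-channel projection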
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(
- in_channels, filter_channels, kernel_size, padding=kernel_size // 2
- )
- self.norm_1 = attentions.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(
- filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
- )
- self.norm_2 = attentions.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- def forward(self, x, x_mask):
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(
- self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- filter_channels_dp,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=None,
- block_length=None,
- mean_only=False,
- prenet=False,
- gin_channels=0,
- ):
-
- super().__init__()
-
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.filter_channels_dp = filter_channels_dp
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.mean_only = mean_only
- self.prenet = prenet
- self.gin_channels = gin_channels
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- if prenet:
- self.pre = modules.ConvReluNorm(
- hidden_channels,
- hidden_channels,
- hidden_channels,
- kernel_size=5,
- n_layers=3,
- p_dropout=0.5,
- )
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- )
-
- self.proj_m = nn.Conv1d(hidden_channels, out_channels, 1)
- if not mean_only:
- self.proj_s = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj_w = DurationPredictor(
- hidden_channels + gin_channels, filter_channels_dp, kernel_size, p_dropout
- )
-
- def forward(self, x, x_lengths, g=None):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
-
- if self.prenet:
- x = self.pre(x, x_mask)
- x = self.encoder(x, x_mask)
-
- if g is not None:
- g_exp = g.expand(-1, -1, x.size(-1))
- x_dp = torch.cat([torch.detach(x), g_exp], 1)
- else:
- x_dp = torch.detach(x)
-
- x_m = self.proj_m(x) * x_mask
- if not self.mean_only:
- x_logs = self.proj_s(x) * x_mask
- else:
- x_logs = torch.zeros_like(x_m)
-
- logw = self.proj_w(x_dp, x_mask)
- return x_m, x_logs, logw, x_mask
-
-
-class FlowSpecDecoder(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_blocks,
- n_layers,
- p_dropout=0.0,
- n_split=4,
- n_sqz=2,
- sigmoid_scale=False,
- gin_channels=0,
- ):
- super().__init__()
-
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_blocks = n_blocks
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- self.n_split = n_split
- self.n_sqz = n_sqz
- self.sigmoid_scale = sigmoid_scale
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for b in range(n_blocks):
- self.flows.append(modules.ActNorm(channels=in_channels * n_sqz))
- self.flows.append(
- modules.InvConvNear(channels=in_channels * n_sqz, n_split=n_split)
- )
- self.flows.append(
- attentions.CouplingBlock(
- in_channels * n_sqz,
- hidden_channels,
- kernel_size=kernel_size,
- dilation_rate=dilation_rate,
- n_layers=n_layers,
- gin_channels=gin_channels,
- p_dropout=p_dropout,
- sigmoid_scale=sigmoid_scale,
- )
- )
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- flows = self.flows
- logdet_tot = 0
- else:
- flows = reversed(self.flows)
- logdet_tot = None
-
- if self.n_sqz > 1:
- x, x_mask = commons.squeeze(x, x_mask, self.n_sqz)
- for f in flows:
- if not reverse:
- x, logdet = f(x, x_mask, g=g, reverse=reverse)
- logdet_tot += logdet
- else:
- x, logdet = f(x, x_mask, g=g, reverse=reverse)
- if self.n_sqz > 1:
- x, x_mask = commons.unsqueeze(x, x_mask, self.n_sqz)
- return x, logdet_tot
-
- def store_inverse(self):
- for f in self.flows:
- f.store_inverse()
-
-
-class FlowGenerator(nn.Module):
- def __init__(
- self,
- n_vocab,
- hidden_channels,
- filter_channels,
- filter_channels_dp,
- out_channels,
- kernel_size=3,
- n_heads=2,
- n_layers_enc=6,
- p_dropout=0.0,
- n_blocks_dec=12,
- kernel_size_dec=5,
- dilation_rate=5,
- n_block_layers=4,
- p_dropout_dec=0.0,
- n_speakers=0,
- gin_channels=0,
- n_split=4,
- n_sqz=1,
- sigmoid_scale=False,
- window_size=None,
- block_length=None,
- mean_only=False,
- hidden_channels_enc=None,
- hidden_channels_dec=None,
- prenet=False,
- **kwargs
- ):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.filter_channels_dp = filter_channels_dp
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_heads = n_heads
- self.n_layers_enc = n_layers_enc
- self.p_dropout = p_dropout
- self.n_blocks_dec = n_blocks_dec
- self.kernel_size_dec = kernel_size_dec
- self.dilation_rate = dilation_rate
- self.n_block_layers = n_block_layers
- self.p_dropout_dec = p_dropout_dec
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
- self.n_split = n_split
- self.n_sqz = n_sqz
- self.sigmoid_scale = sigmoid_scale
- self.window_size = window_size
- self.block_length = block_length
- self.mean_only = mean_only
- self.hidden_channels_enc = hidden_channels_enc
- self.hidden_channels_dec = hidden_channels_dec
- self.prenet = prenet
-
- self.encoder = TextEncoder(
- n_vocab,
- out_channels,
- hidden_channels_enc or hidden_channels,
- filter_channels,
- filter_channels_dp,
- n_heads,
- n_layers_enc,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- mean_only=mean_only,
- prenet=prenet,
- gin_channels=gin_channels,
- )
-
- self.decoder = FlowSpecDecoder(
- out_channels,
- hidden_channels_dec or hidden_channels,
- kernel_size_dec,
- dilation_rate,
- n_blocks_dec,
- n_block_layers,
- p_dropout=p_dropout_dec,
- n_split=n_split,
- n_sqz=n_sqz,
- sigmoid_scale=sigmoid_scale,
- gin_channels=gin_channels,
- )
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
- nn.init.uniform_(self.emb_g.weight, -0.1, 0.1)
-
- def forward(
- self,
- x,
- x_lengths,
- y=None,
- y_lengths=None,
- g=None,
- gen=False,
- noise_scale=1.0,
- length_scale=1.0,
- ):
- if g is not None:
- g = F.normalize(self.emb_g(g)).unsqueeze(-1) # [b, h]
- x_m, x_logs, logw, x_mask = self.encoder(x, x_lengths, g=g)
-
- if gen:
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_max_length = None
- else:
- y_max_length = y.size(2)
- y, y_lengths, y_max_length = self.preprocess(y, y_lengths, y_max_length)
- z_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, y_max_length), 1).to(
- x_mask.dtype
- )
- attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(z_mask, 2)
-
- if gen:
- attn = commons.generate_path(
- w_ceil.squeeze(1), attn_mask.squeeze(1)
- ).unsqueeze(1)
- z_m = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_m.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- z_logs = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_logs.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- logw_ = torch.log(1e-8 + torch.sum(attn, -1)) * x_mask
-
- z = (z_m + torch.exp(z_logs) * torch.randn_like(z_m) * noise_scale) * z_mask
- y, logdet = self.decoder(z, z_mask, g=g, reverse=True)
- return (
- (y, z_m, z_logs, logdet, z_mask),
- (x_m, x_logs, x_mask),
- (attn, logw, logw_),
- )
- else:
- z, logdet = self.decoder(y, z_mask, g=g, reverse=False)
- with torch.no_grad():
- x_s_sq_r = torch.exp(-2 * x_logs)
- logp1 = torch.sum(-0.5 * math.log(2 * math.pi) - x_logs, [1]).unsqueeze(
- -1
- ) # [b, t, 1]
- logp2 = torch.matmul(
- x_s_sq_r.transpose(1, 2), -0.5 * (z ** 2)
- ) # [b, t, d] x [b, d, t'] = [b, t, t']
- logp3 = torch.matmul(
- (x_m * x_s_sq_r).transpose(1, 2), z
- ) # [b, t, d] x [b, d, t'] = [b, t, t']
- logp4 = torch.sum(-0.5 * (x_m ** 2) * x_s_sq_r, [1]).unsqueeze(
- -1
- ) # [b, t, 1]
- logp = logp1 + logp2 + logp3 + logp4 # [b, t, t']
-
- attn = (
- monotonic_align.maximum_path(logp, attn_mask.squeeze(1))
- .unsqueeze(1)
- .detach()
- )
- z_m = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_m.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- z_logs = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_logs.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- logw_ = torch.log(1e-8 + torch.sum(attn, -1)) * x_mask
- return (
- (z, z_m, z_logs, logdet, z_mask),
- (x_m, x_logs, x_mask),
- (attn, logw, logw_),
- )
-
- def preprocess(self, y, y_lengths, y_max_length):
- if y_max_length is not None:
- y_max_length = (y_max_length // self.n_sqz) * self.n_sqz
- y = y[:, :, :y_max_length]
- y_lengths = (y_lengths // self.n_sqz) * self.n_sqz
- return y, y_lengths, y_max_length
-
- def store_inverse(self):
- self.decoder.store_inverse()
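
A rough inference sketch for `FlowGenerator` above, assuming the glow_tts package (`modules`, `commons`, `attentions`, `monotonic_align`) is importable and weights have been loaded; the hyperparameters here are illustrative, not values from any released checkpoint:

```python
import torch

model = FlowGenerator(
    n_vocab=100, hidden_channels=192, filter_channels=768,
    filter_channels_dp=256, out_channels=80, n_sqz=2,
)
model.eval()
model.store_inverse()  # cache inverse 1x1 convolutions for the reverse (generation) pass

x = torch.randint(1, 100, (1, 20))  # token ids, shape [batch, text_len]
x_lengths = torch.tensor([20])
with torch.no_grad():
    (mel, *_), _, (attn, *_) = model(
        x, x_lengths, gen=True, noise_scale=0.667, length_scale=1.0
    )
print(mel.shape)  # [1, 80, T'] predicted mel-spectrogram frames
```
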
diff --git a/spaces/Hina4867/bingo/src/components/learn-more.tsx b/spaces/Hina4867/bingo/src/components/learn-more.tsx
deleted file mode 100644
index a64459ee7900a612292e117a6bda96ee9260990f..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/components/learn-more.tsx
+++ /dev/null
@@ -1,39 +0,0 @@
-import React from 'react'
-import { SourceAttribution } from '@/lib/bots/bing/types'
-
-export interface LearnMoreProps {
- sourceAttributions?: SourceAttribution[]
-}
-
-export function LearnMore({ sourceAttributions }: LearnMoreProps) {
- if (!sourceAttributions?.length) {
- return null
- }
-
- return (
-
- )
-}
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/model.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/model.py
deleted file mode 100644
index bb205b910daaecd55effd1e77e77d0b43952624f..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/model.py
+++ /dev/null
@@ -1,594 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-RoBERTa: A Robustly Optimized BERT Pretraining Approach.
-"""
-
-import logging
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, TransformerEncoder
-from fairseq.modules import LayerNorm
-from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-from fairseq.utils import safe_getattr, safe_hasattr
-
-from .hub_interface import RobertaHubInterface
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("roberta")
-class RobertaModel(FairseqEncoderModel):
- @classmethod
- def hub_models(cls):
- return {
- "roberta.base": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz",
- "roberta.large": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz",
- "roberta.large.mnli": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.mnli.tar.gz",
- "roberta.large.wsc": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.wsc.tar.gz",
- }
-
- def __init__(self, args, encoder):
- super().__init__(encoder)
- self.args = args
-
- # We follow BERT's random weight initialization
- self.apply(init_bert_params)
-
- self.classification_heads = nn.ModuleDict()
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--encoder-layers", type=int, metavar="L", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="H",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="F",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="A",
- help="num encoder attention heads",
- )
- parser.add_argument(
- "--activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
- parser.add_argument(
- "--pooler-activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use for pooler layer",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
- parser.add_argument(
- "--layernorm-embedding",
- action="store_true",
- help="add layernorm to embedding",
- )
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--activation-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN",
- )
- parser.add_argument(
- "--pooler-dropout",
- type=float,
- metavar="D",
- help="dropout probability in the masked_lm pooler layers",
- )
- parser.add_argument(
- "--max-positions", type=int, help="number of positional embeddings to learn"
- )
- parser.add_argument(
- "--load-checkpoint-heads",
- action="store_true",
- help="(re-)register and load heads when loading checkpoints",
- )
- parser.add_argument(
- "--untie-weights-roberta",
- action="store_true",
- help="Untie weights between embeddings and classifiers in RoBERTa",
- )
- # args for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019)
- parser.add_argument(
- "--encoder-layerdrop",
- type=float,
- metavar="D",
- default=0,
- help="LayerDrop probability for encoder",
- )
- parser.add_argument(
- "--encoder-layers-to-keep",
- default=None,
- help="which layers to *keep* when pruning as a comma-separated list",
- )
- # args for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020)
- parser.add_argument(
- "--quant-noise-pq",
- type=float,
- metavar="D",
- default=0,
- help="iterative PQ quantization noise at training time",
- )
- parser.add_argument(
- "--quant-noise-pq-block-size",
- type=int,
- metavar="D",
- default=8,
- help="block size of quantization noise at training time",
- )
- parser.add_argument(
- "--quant-noise-scalar",
- type=float,
- metavar="D",
- default=0,
- help="scalar quantization noise and scalar quantization at training time",
- )
- # args for "Better Fine-Tuning by Reducing Representational Collapse" (Aghajanyan et al. 2020)
- parser.add_argument(
- "--spectral-norm-classification-head",
- action="store_true",
- default=False,
- help="Apply spectral normalization on the classification head",
- )
- # args for Fully Sharded Data Parallel (FSDP) training
- parser.add_argument(
- "--min-params-to-wrap",
- type=int,
- metavar="D",
- default=DEFAULT_MIN_PARAMS_TO_WRAP,
- help=(
- "minimum number of params for a layer to be wrapped with FSDP() when "
- "training with --ddp-backend=fully_sharded. Smaller values will "
- "improve memory efficiency, but may make torch.distributed "
- "communication less efficient due to smaller input sizes. This option "
- "is set to 0 (i.e., always wrap) when --checkpoint-activations or "
- "--offload-activations are passed."
- )
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- from omegaconf import OmegaConf
-
- if OmegaConf.is_config(args):
- OmegaConf.set_struct(args, False)
-
- # make sure all arguments are present
- base_architecture(args)
-
- if not safe_hasattr(args, "max_positions"):
- if not safe_hasattr(args, "tokens_per_sample"):
- args.tokens_per_sample = task.max_positions()
- args.max_positions = args.tokens_per_sample
-
- encoder = RobertaEncoder(args, task.source_dictionary)
-
- if OmegaConf.is_config(args):
- OmegaConf.set_struct(args, True)
-
- return cls(args, encoder)
-
- def forward(
- self,
- src_tokens,
- features_only=False,
- return_all_hiddens=False,
- classification_head_name=None,
- **kwargs,
- ):
- if classification_head_name is not None:
- features_only = True
-
- x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
-
- if classification_head_name is not None:
- x = self.classification_heads[classification_head_name](x)
- return x, extra
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- """Get normalized probabilities (or log probs) from a net's output."""
- logits = net_output[0].float()
- if log_probs:
- return F.log_softmax(logits, dim=-1)
- else:
- return F.softmax(logits, dim=-1)
-
- def register_classification_head(
- self, name, num_classes=None, inner_dim=None, **kwargs
- ):
- """Register a classification head."""
- if name in self.classification_heads:
- prev_num_classes = self.classification_heads[name].out_proj.out_features
- prev_inner_dim = self.classification_heads[name].dense.out_features
- if num_classes != prev_num_classes or inner_dim != prev_inner_dim:
- logger.warning(
- 're-registering head "{}" with num_classes {} (prev: {}) '
- "and inner_dim {} (prev: {})".format(
- name, num_classes, prev_num_classes, inner_dim, prev_inner_dim
- )
- )
- self.classification_heads[name] = RobertaClassificationHead(
- input_dim=self.args.encoder_embed_dim,
- inner_dim=inner_dim or self.args.encoder_embed_dim,
- num_classes=num_classes,
- activation_fn=self.args.pooler_activation_fn,
- pooler_dropout=self.args.pooler_dropout,
- q_noise=self.args.quant_noise_pq,
- qn_block_size=self.args.quant_noise_pq_block_size,
- do_spectral_norm=self.args.spectral_norm_classification_head,
- )
-
- @property
- def supported_targets(self):
- return {"self"}
-
- @classmethod
- def from_pretrained(
- cls,
- model_name_or_path,
- checkpoint_file="model.pt",
- data_name_or_path=".",
- bpe="gpt2",
- **kwargs,
- ):
- from fairseq import hub_utils
-
- x = hub_utils.from_pretrained(
- model_name_or_path,
- checkpoint_file,
- data_name_or_path,
- archive_map=cls.hub_models(),
- bpe=bpe,
- load_checkpoint_heads=True,
- **kwargs,
- )
-
- logger.info(x["args"])
- return RobertaHubInterface(x["args"], x["task"], x["models"][0])
-
- def upgrade_state_dict_named(self, state_dict, name):
- prefix = name + "." if name != "" else ""
-
- # rename decoder -> encoder before upgrading children modules
- for k in list(state_dict.keys()):
- if k.startswith(prefix + "decoder"):
- new_k = prefix + "encoder" + k[len(prefix + "decoder") :]
- state_dict[new_k] = state_dict[k]
- del state_dict[k]
-
- # rename emb_layer_norm -> layernorm_embedding
- for k in list(state_dict.keys()):
- if ".emb_layer_norm." in k:
- new_k = k.replace(".emb_layer_norm.", ".layernorm_embedding.")
- state_dict[new_k] = state_dict[k]
- del state_dict[k]
-
- # upgrade children modules
- super().upgrade_state_dict_named(state_dict, name)
-
- # Handle new classification heads present in the state dict.
- current_head_names = (
- []
- if not hasattr(self, "classification_heads")
- else self.classification_heads.keys()
- )
- keys_to_delete = []
- for k in state_dict.keys():
- if not k.startswith(prefix + "classification_heads."):
- continue
-
- head_name = k[len(prefix + "classification_heads.") :].split(".")[0]
- num_classes = state_dict[
- prefix + "classification_heads." + head_name + ".out_proj.weight"
- ].size(0)
- inner_dim = state_dict[
- prefix + "classification_heads." + head_name + ".dense.weight"
- ].size(0)
-
- if getattr(self.args, "load_checkpoint_heads", False):
- if head_name not in current_head_names:
- self.register_classification_head(head_name, num_classes, inner_dim)
- else:
- if head_name not in current_head_names:
- logger.warning(
- "deleting classification head ({}) from checkpoint "
- "not present in current model: {}".format(head_name, k)
- )
- keys_to_delete.append(k)
- elif (
- num_classes
- != self.classification_heads[head_name].out_proj.out_features
- or inner_dim
- != self.classification_heads[head_name].dense.out_features
- ):
- logger.warning(
- "deleting classification head ({}) from checkpoint "
- "with different dimensions than current model: {}".format(
- head_name, k
- )
- )
- keys_to_delete.append(k)
- for k in keys_to_delete:
- del state_dict[k]
-
- # Copy any newly-added classification heads into the state dict
- # with their current weights.
- if hasattr(self, "classification_heads"):
- cur_state = self.classification_heads.state_dict()
- for k, v in cur_state.items():
- if prefix + "classification_heads." + k not in state_dict:
- logger.info("Overwriting " + prefix + "classification_heads." + k)
- state_dict[prefix + "classification_heads." + k] = v
-
-
-class RobertaLMHead(nn.Module):
- """Head for masked language modeling."""
-
- def __init__(self, embed_dim, output_dim, activation_fn, weight=None):
- super().__init__()
- self.dense = nn.Linear(embed_dim, embed_dim)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.layer_norm = LayerNorm(embed_dim)
-
- if weight is None:
- weight = nn.Linear(embed_dim, output_dim, bias=False).weight
- self.weight = weight
- self.bias = nn.Parameter(torch.zeros(output_dim))
-
- def forward(self, features, masked_tokens=None, **kwargs):
- # Only project the masked tokens while training,
- # saves both memory and computation
- if masked_tokens is not None:
- features = features[masked_tokens, :]
-
- x = self.dense(features)
- x = self.activation_fn(x)
- x = self.layer_norm(x)
- # project back to size of vocabulary with bias
- x = F.linear(x, self.weight) + self.bias
- return x
-
-
-class RobertaClassificationHead(nn.Module):
- """Head for sentence-level classification tasks."""
-
- def __init__(
- self,
- input_dim,
- inner_dim,
- num_classes,
- activation_fn,
- pooler_dropout,
- q_noise=0,
- qn_block_size=8,
- do_spectral_norm=False,
- ):
- super().__init__()
- self.dense = nn.Linear(input_dim, inner_dim)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.dropout = nn.Dropout(p=pooler_dropout)
- self.out_proj = apply_quant_noise_(
- nn.Linear(inner_dim, num_classes), q_noise, qn_block_size
- )
- if do_spectral_norm:
- if q_noise != 0:
- raise NotImplementedError(
- "Attempting to use Spectral Normalization with Quant Noise. This is not officially supported"
- )
- self.out_proj = torch.nn.utils.spectral_norm(self.out_proj)
-
- def forward(self, features, **kwargs):
- x = features[:, 0, :] # take token (equiv. to [CLS])
- x = self.dropout(x)
- x = self.dense(x)
- x = self.activation_fn(x)
- x = self.dropout(x)
- x = self.out_proj(x)
- return x
-
-
-class RobertaEncoder(FairseqEncoder):
- """RoBERTa encoder."""
-
- def __init__(self, args, dictionary):
- super().__init__(dictionary)
-
- # set any missing default values
- base_architecture(args)
- self.args = args
-
- if args.encoder_layers_to_keep:
- args.encoder_layers = len(args.encoder_layers_to_keep.split(","))
-
- embed_tokens = self.build_embedding(
- len(dictionary), args.encoder_embed_dim, dictionary.pad()
- )
-
- self.sentence_encoder = self.build_encoder(args, dictionary, embed_tokens)
-
- self.lm_head = self.build_lm_head(
- embed_dim=args.encoder_embed_dim,
- output_dim=len(dictionary),
- activation_fn=args.activation_fn,
- weight=(
- self.sentence_encoder.embed_tokens.weight
- if not args.untie_weights_roberta
- else None
- ),
- )
-
- def build_embedding(self, vocab_size, embedding_dim, padding_idx):
- return nn.Embedding(vocab_size, embedding_dim, padding_idx)
-
- def build_encoder(self, args, dictionary, embed_tokens):
- encoder = TransformerEncoder(args, dictionary, embed_tokens)
- encoder.apply(init_bert_params)
- return encoder
-
- def build_lm_head(self, embed_dim, output_dim, activation_fn, weight):
- return RobertaLMHead(embed_dim, output_dim, activation_fn, weight)
-
- def forward(
- self,
- src_tokens,
- features_only=False,
- return_all_hiddens=False,
- masked_tokens=None,
- **unused,
- ):
- """
- Args:
- src_tokens (LongTensor): input tokens of shape `(batch, src_len)`
- features_only (bool, optional): skip LM head and just return
- features. If True, the output will be of shape
- `(batch, src_len, embed_dim)`.
- return_all_hiddens (bool, optional): also return all of the
- intermediate hidden states (default: False).
-
- Returns:
- tuple:
- - the LM output of shape `(batch, src_len, vocab)`
- - a dictionary of additional data, where 'inner_states'
- is a list of hidden states. Note that the hidden
- states have shape `(src_len, batch, vocab)`.
- """
- x, extra = self.extract_features(
- src_tokens, return_all_hiddens=return_all_hiddens
- )
- if not features_only:
- x = self.output_layer(x, masked_tokens=masked_tokens)
- return x, extra
-
- def extract_features(self, src_tokens, return_all_hiddens=False, **kwargs):
- encoder_out = self.sentence_encoder(
- src_tokens,
- return_all_hiddens=return_all_hiddens,
- token_embeddings=kwargs.get("token_embeddings", None),
- )
- # T x B x C -> B x T x C
- features = encoder_out["encoder_out"][0].transpose(0, 1)
- inner_states = encoder_out["encoder_states"] if return_all_hiddens else None
- return features, {"inner_states": inner_states}
-
- def output_layer(self, features, masked_tokens=None, **unused):
- return self.lm_head(features, masked_tokens)
-
- def max_positions(self):
- """Maximum output length supported by the encoder."""
- return self.args.max_positions
-
-
-@register_model_architecture("roberta", "roberta")
-def base_architecture(args):
- args.encoder_layers = safe_getattr(args, "encoder_layers", 12)
- args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 768)
- args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 3072)
- args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 12)
-
- args.dropout = safe_getattr(args, "dropout", 0.1)
- args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1)
- args.activation_dropout = safe_getattr(args, "activation_dropout", 0.0)
- args.pooler_dropout = safe_getattr(args, "pooler_dropout", 0.0)
-
- args.max_source_positions = safe_getattr(args, "max_positions", 512)
- args.no_token_positional_embeddings = safe_getattr(
- args, "no_token_positional_embeddings", False
- )
-
- # BERT has a few structural differences compared to the original Transformer
- args.encoder_learned_pos = safe_getattr(args, "encoder_learned_pos", True)
- args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", True)
- args.no_scale_embedding = safe_getattr(args, "no_scale_embedding", True)
- args.activation_fn = safe_getattr(args, "activation_fn", "gelu")
- args.encoder_normalize_before = safe_getattr(args, "encoder_normalize_before", False)
- args.pooler_activation_fn = safe_getattr(args, "pooler_activation_fn", "tanh")
- args.untie_weights_roberta = safe_getattr(args, "untie_weights_roberta", False)
-
- # Adaptive input config
- args.adaptive_input = safe_getattr(args, "adaptive_input", False)
-
- # LayerDrop config
- args.encoder_layerdrop = safe_getattr(args, "encoder_layerdrop", 0.0)
- args.encoder_layers_to_keep = safe_getattr(args, "encoder_layers_to_keep", None)
-
- # Quantization noise config
- args.quant_noise_pq = safe_getattr(args, "quant_noise_pq", 0)
- args.quant_noise_pq_block_size = safe_getattr(args, "quant_noise_pq_block_size", 8)
- args.quant_noise_scalar = safe_getattr(args, "quant_noise_scalar", 0)
-
- # R4F config
- args.spectral_norm_classification_head = safe_getattr(
- args, "spectral_norm_classification_head", False
- )
-
-
-@register_model_architecture("roberta", "roberta_prenorm")
-def roberta_prenorm_architecture(args):
- args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", False)
- args.encoder_normalize_before = safe_getattr(args, "encoder_normalize_before", True)
- base_architecture(args)
-
-
-@register_model_architecture("roberta", "roberta_base")
-def roberta_base_architecture(args):
- base_architecture(args)
-
-
-@register_model_architecture("roberta", "roberta_large")
-def roberta_large_architecture(args):
- args.encoder_layers = safe_getattr(args, "encoder_layers", 24)
- args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 16)
- base_architecture(args)
-
-
-@register_model_architecture("roberta", "xlm")
-def xlm_architecture(args):
- args.encoder_layers = safe_getattr(args, "encoder_layers", 16)
- args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 1280)
- args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 1280 * 4)
- args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 16)
- base_architecture(args)
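
A hedged usage sketch of the hub interface exposed by `RobertaModel.from_pretrained` above; the checkpoint directory is hypothetical and must contain `model.pt` plus the dictionary files:

```python
roberta = RobertaModel.from_pretrained(
    "/path/to/roberta.base",   # hypothetical local directory
    checkpoint_file="model.pt",
)
roberta.eval()

tokens = roberta.encode("Hello world!")      # GPT-2 BPE encoding with <s>/</s> markers
features = roberta.extract_features(tokens)  # shape [1, seq_len, encoder_embed_dim]
print(features.shape)
```
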
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/tasks/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/tasks/__init__.py
deleted file mode 100644
index 9a46b012c573a76e00e489307720fc3fa462c296..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/tasks/__init__.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-import argparse
-import importlib
-import os
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import merge_with_parent
-from hydra.core.config_store import ConfigStore
-
-from .fairseq_task import FairseqTask, LegacyFairseqTask # noqa
-
-
-# register dataclass
-TASK_DATACLASS_REGISTRY = {}
-TASK_REGISTRY = {}
-TASK_CLASS_NAMES = set()
-
-
-def setup_task(cfg: FairseqDataclass, **kwargs):
- task = None
- task_name = getattr(cfg, "task", None)
-
- if isinstance(task_name, str):
- # legacy tasks
- task = TASK_REGISTRY[task_name]
- if task_name in TASK_DATACLASS_REGISTRY:
- dc = TASK_DATACLASS_REGISTRY[task_name]
- cfg = dc.from_namespace(cfg)
- else:
- task_name = getattr(cfg, "_name", None)
-
- if task_name and task_name in TASK_DATACLASS_REGISTRY:
- dc = TASK_DATACLASS_REGISTRY[task_name]
- cfg = merge_with_parent(dc(), cfg)
- task = TASK_REGISTRY[task_name]
-
- assert (
- task is not None
- ), f"Could not infer task type from {cfg}. Available argparse tasks: {TASK_REGISTRY.keys()}. Available hydra tasks: {TASK_DATACLASS_REGISTRY.keys()}"
-
- return task.setup_task(cfg, **kwargs)
-
-
-def register_task(name, dataclass=None):
- """
- New tasks can be added to fairseq with the
- :func:`~fairseq.tasks.register_task` function decorator.
-
- For example::
-
- @register_task('classification')
- class ClassificationTask(FairseqTask):
- (...)
-
- .. note::
-
- All Tasks must implement the :class:`~fairseq.tasks.FairseqTask`
- interface.
-
- Args:
- name (str): the name of the task
- """
-
- def register_task_cls(cls):
- if name in TASK_REGISTRY:
- raise ValueError("Cannot register duplicate task ({})".format(name))
- if not issubclass(cls, FairseqTask):
- raise ValueError(
- "Task ({}: {}) must extend FairseqTask".format(name, cls.__name__)
- )
- if cls.__name__ in TASK_CLASS_NAMES:
- raise ValueError(
- "Cannot register task with duplicate class name ({})".format(
- cls.__name__
- )
- )
- TASK_REGISTRY[name] = cls
- TASK_CLASS_NAMES.add(cls.__name__)
-
- if dataclass is not None and not issubclass(dataclass, FairseqDataclass):
- raise ValueError(
- "Dataclass {} must extend FairseqDataclass".format(dataclass)
- )
-
- cls.__dataclass = dataclass
- if dataclass is not None:
- TASK_DATACLASS_REGISTRY[name] = dataclass
-
- cs = ConfigStore.instance()
- node = dataclass()
- node._name = name
- cs.store(name=name, group="task", node=node, provider="fairseq")
-
- return cls
-
- return register_task_cls
-
-
-def get_task(name):
- return TASK_REGISTRY[name]
-
-
-def import_tasks(tasks_dir, namespace):
- for file in os.listdir(tasks_dir):
- path = os.path.join(tasks_dir, file)
- if (
- not file.startswith("_")
- and not file.startswith(".")
- and (file.endswith(".py") or os.path.isdir(path))
- ):
- task_name = file[: file.find(".py")] if file.endswith(".py") else file
- importlib.import_module(namespace + "." + task_name)
-
- # expose `task_parser` for sphinx
- if task_name in TASK_REGISTRY:
- parser = argparse.ArgumentParser(add_help=False)
- group_task = parser.add_argument_group("Task name")
- # fmt: off
- group_task.add_argument('--task', metavar=task_name,
- help='Enable this task with: ``--task=' + task_name + '``')
- # fmt: on
- group_args = parser.add_argument_group(
- "Additional command-line arguments"
- )
- TASK_REGISTRY[task_name].add_args(group_args)
- globals()[task_name + "_parser"] = parser
-
-
-# automatically import any Python files in the tasks/ directory
-tasks_dir = os.path.dirname(__file__)
-import_tasks(tasks_dir, "fairseq.tasks")
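
Beyond the class-only example in the `register_task` docstring, the dataclass path handled by `TASK_DATACLASS_REGISTRY` looks roughly like this (a minimal sketch; the task name and config field are made up):

```python
from dataclasses import dataclass, field

from fairseq.dataclass import FairseqDataclass
from fairseq.tasks import FairseqTask, register_task


@dataclass
class ToyTaskConfig(FairseqDataclass):
    data: str = field(default="", metadata={"help": "path to the data directory"})


@register_task("toy_task", dataclass=ToyTaskConfig)
class ToyTask(FairseqTask):
    @classmethod
    def setup_task(cls, cfg: ToyTaskConfig, **kwargs):
        return cls(cfg)
```
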
diff --git a/spaces/Ikaros521/moe-tts/text/sanskrit.py b/spaces/Ikaros521/moe-tts/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/moe-tts/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
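
A small usage sketch for `devanagari_to_ipa` above (requires the `indic_transliteration` package):

```python
text = "संस्कृतम्"               # "Sanskrit" in Devanagari
ipa = devanagari_to_ipa(text)  # Devanagari -> IAST -> rough IPA via the rules above
print(ipa)
```
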
diff --git a/spaces/ItsJayQz/GTA5_Artwork_Diffusion/app.py b/spaces/ItsJayQz/GTA5_Artwork_Diffusion/app.py
deleted file mode 100644
index b7c6b7f310646f720b115d1c0b270e4009548655..0000000000000000000000000000000000000000
--- a/spaces/ItsJayQz/GTA5_Artwork_Diffusion/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-
-model_id = 'ItsJayQz/GTA5_Artwork_Diffusion'
-prefix = 'gtav style'
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-
-    generator = torch.Generator('cuda' if torch.cuda.is_available() else 'cpu').manual_seed(seed) if seed != 0 else None  # CPU fallback so fixed seeds also work without CUDA
- prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
- try:
- if img is not None:
- return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe_i2i(
- prompt,
- negative_prompt = neg_prompt,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-
-
-
-        Gta5 Artwork Diffusion
-
-
- Demo for Gta5 Artwork Diffusion Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
diff --git a/spaces/SeViLA/SeViLA/lavis/models/alpro_models/alpro_outputs.py b/spaces/SeViLA/SeViLA/lavis/models/alpro_models/alpro_outputs.py
deleted file mode 100644
index 68a11a9cfbd95c866597cf0e8d5a126134587de6..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/models/alpro_models/alpro_outputs.py
+++ /dev/null
@@ -1,59 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-from dataclasses import dataclass
-from typing import Optional
-
-import torch
-from transformers.modeling_outputs import (
- BaseModelOutputWithPoolingAndCrossAttentions,
- ModelOutput,
-)
-
-
-@dataclass
-class AlproSimilarity(ModelOutput):
- sim_v2t: torch.FloatTensor = None
- sim_t2v: torch.FloatTensor = None
-
- sim_v2t_targets: Optional[torch.FloatTensor] = None
- sim_t2v_targets: Optional[torch.FloatTensor] = None
-
-
-@dataclass
-class AlproIntermediateOutput(ModelOutput):
- # uni-modal features
- video_embeds: torch.FloatTensor = None
- text_embeds: Optional[torch.FloatTensor] = None
-
- # intermediate outputs of multimodal encoder
- encoder_output: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
- encoder_output_neg: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
-
- vtm_logits: Optional[torch.FloatTensor] = None
- vtm_labels: Optional[torch.LongTensor] = None
-
-
-@dataclass
-class AlproOutput(ModelOutput):
- # some finetuned models (e.g. BlipVQA) do not compute similarity, thus optional.
- sims: Optional[AlproSimilarity] = None
-
- intermediate_output: AlproIntermediateOutput = None
-
- loss: Optional[torch.FloatTensor] = None
-
- loss_vtc: Optional[torch.FloatTensor] = None
-
- loss_vtm: Optional[torch.FloatTensor] = None
-
- loss_mlm: Optional[torch.FloatTensor] = None
-
-
-@dataclass
-class AlproOutputWithLogits(AlproOutput):
- logits: torch.FloatTensor = None
diff --git a/spaces/SeViLA/SeViLA/lavis/processors/functional_video.py b/spaces/SeViLA/SeViLA/lavis/processors/functional_video.py
deleted file mode 100644
index 597a29315d4e1a575e7209edb0618eeaf4fc024a..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/processors/functional_video.py
+++ /dev/null
@@ -1,121 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import warnings
-
-import torch
-
-
-def _is_tensor_video_clip(clip):
- if not torch.is_tensor(clip):
- raise TypeError("clip should be Tensor. Got %s" % type(clip))
-
- if not clip.ndimension() == 4:
- raise ValueError("clip should be 4D. Got %dD" % clip.dim())
-
- return True
-
-
-def crop(clip, i, j, h, w):
- """
- Args:
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
- """
- if len(clip.size()) != 4:
- raise ValueError("clip should be a 4D tensor")
- return clip[..., i : i + h, j : j + w]
-
-
-def resize(clip, target_size, interpolation_mode):
- if len(target_size) != 2:
- raise ValueError(
- f"target size should be tuple (height, width), instead got {target_size}"
- )
- return torch.nn.functional.interpolate(
- clip, size=target_size, mode=interpolation_mode, align_corners=False
- )
-
-
-def resized_crop(clip, i, j, h, w, size, interpolation_mode="bilinear"):
- """
- Do spatial cropping and resizing to the video clip
- Args:
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
- i (int): i in (i,j) i.e coordinates of the upper left corner.
- j (int): j in (i,j) i.e coordinates of the upper left corner.
- h (int): Height of the cropped region.
- w (int): Width of the cropped region.
- size (tuple(int, int)): height and width of resized clip
- Returns:
- clip (torch.tensor): Resized and cropped clip. Size is (C, T, H, W)
- """
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- clip = crop(clip, i, j, h, w)
- clip = resize(clip, size, interpolation_mode)
- return clip
-
-
-def center_crop(clip, crop_size):
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- h, w = clip.size(-2), clip.size(-1)
- th, tw = crop_size
- if h < th or w < tw:
- raise ValueError("height and width must be no smaller than crop_size")
-
- i = int(round((h - th) / 2.0))
- j = int(round((w - tw) / 2.0))
- return crop(clip, i, j, th, tw)
-
-
-def to_tensor(clip):
- """
- Convert tensor data type from uint8 to float, divide value by 255.0 and
- permute the dimensions of clip tensor
- Args:
- clip (torch.tensor, dtype=torch.uint8): Size is (T, H, W, C)
- Return:
- clip (torch.tensor, dtype=torch.float): Size is (C, T, H, W)
- """
- _is_tensor_video_clip(clip)
- if not clip.dtype == torch.uint8:
- raise TypeError(
- "clip tensor should have data type uint8. Got %s" % str(clip.dtype)
- )
- return clip.float().permute(3, 0, 1, 2) / 255.0
-
-
-def normalize(clip, mean, std, inplace=False):
- """
- Args:
- clip (torch.tensor): Video clip to be normalized. Size is (C, T, H, W)
- mean (tuple): pixel RGB mean. Size is (3)
- std (tuple): pixel standard deviation. Size is (3)
- Returns:
- normalized clip (torch.tensor): Size is (C, T, H, W)
- """
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- if not inplace:
- clip = clip.clone()
- mean = torch.as_tensor(mean, dtype=clip.dtype, device=clip.device)
- std = torch.as_tensor(std, dtype=clip.dtype, device=clip.device)
- clip.sub_(mean[:, None, None, None]).div_(std[:, None, None, None])
- return clip
-
-
-def hflip(clip):
- """
- Args:
- clip (torch.tensor): Video clip to be normalized. Size is (C, T, H, W)
- Returns:
- flipped clip (torch.tensor): Size is (C, T, H, W)
- """
- if not _is_tensor_video_clip(clip):
- raise ValueError("clip should be a 4D torch.tensor")
- return clip.flip(-1)
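
The helpers above compose as follows; a minimal sketch on a random uint8 clip:

```python
import torch

# dummy clip: 8 frames of 128x128 RGB, layout (T, H, W, C), dtype uint8
frames = torch.randint(0, 256, (8, 128, 128, 3), dtype=torch.uint8)

clip = to_tensor(frames)                      # float (C, T, H, W) scaled to [0, 1]
clip = resized_crop(clip, i=10, j=10, h=100, w=100, size=(64, 64))
clip = normalize(clip, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
clip = hflip(clip)
print(clip.shape)  # torch.Size([3, 8, 64, 64])
```
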
diff --git a/spaces/ServerX/PorcoDiaz/diffq/__init__.py b/spaces/ServerX/PorcoDiaz/diffq/__init__.py
deleted file mode 100644
index 2b997ee4ed99a90cc43db7812383927e6fe1a3e8..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/diffq/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-"""
-This package implements different quantization strategies:
-
-- `diffq.uniform.UniformQuantizer`: classic uniform quantization over n bits.
-- `diffq.diffq.DiffQuantizer`: differentiable quantizer based on scaled noise injection.
-
-Also, do check `diffq.base.BaseQuantizer` for the common methods of all Quantizers.
-"""
-
-from .uniform import UniformQuantizer
-from .diffq import DiffQuantizer
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/dataset.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/dataset.py
deleted file mode 100644
index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/dataset.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import os
-import random
-
-import numpy as np
-import torch
-import torch.utils.data
-from tqdm import tqdm
-
-from . import spec_utils
-
-
-class VocalRemoverValidationSet(torch.utils.data.Dataset):
- def __init__(self, patch_list):
- self.patch_list = patch_list
-
- def __len__(self):
- return len(self.patch_list)
-
- def __getitem__(self, idx):
- path = self.patch_list[idx]
- data = np.load(path)
-
- X, y = data["X"], data["y"]
-
- X_mag = np.abs(X)
- y_mag = np.abs(y)
-
- return X_mag, y_mag
-
-
-def make_pair(mix_dir, inst_dir):
- input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"]
-
- X_list = sorted(
- [
- os.path.join(mix_dir, fname)
- for fname in os.listdir(mix_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
- y_list = sorted(
- [
- os.path.join(inst_dir, fname)
- for fname in os.listdir(inst_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
-
- filelist = list(zip(X_list, y_list))
-
- return filelist
-
-
-def train_val_split(dataset_dir, split_mode, val_rate, val_filelist):
- if split_mode == "random":
- filelist = make_pair(
- os.path.join(dataset_dir, "mixtures"),
- os.path.join(dataset_dir, "instruments"),
- )
-
- random.shuffle(filelist)
-
- if len(val_filelist) == 0:
- val_size = int(len(filelist) * val_rate)
- train_filelist = filelist[:-val_size]
- val_filelist = filelist[-val_size:]
- else:
- train_filelist = [
- pair for pair in filelist if list(pair) not in val_filelist
- ]
- elif split_mode == "subdirs":
- if len(val_filelist) != 0:
- raise ValueError(
- "The `val_filelist` option is not available in `subdirs` mode"
- )
-
- train_filelist = make_pair(
- os.path.join(dataset_dir, "training/mixtures"),
- os.path.join(dataset_dir, "training/instruments"),
- )
-
- val_filelist = make_pair(
- os.path.join(dataset_dir, "validation/mixtures"),
- os.path.join(dataset_dir, "validation/instruments"),
- )
-
- return train_filelist, val_filelist
-
-
-def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha):
- perm = np.random.permutation(len(X))
- for i, idx in enumerate(tqdm(perm)):
- if np.random.uniform() < reduction_rate:
- y[idx] = spec_utils.reduce_vocal_aggressively(
- X[idx], y[idx], reduction_mask
- )
-
- if np.random.uniform() < 0.5:
- # swap channel
- X[idx] = X[idx, ::-1]
- y[idx] = y[idx, ::-1]
- if np.random.uniform() < 0.02:
- # mono
- X[idx] = X[idx].mean(axis=0, keepdims=True)
- y[idx] = y[idx].mean(axis=0, keepdims=True)
- if np.random.uniform() < 0.02:
- # inst
- X[idx] = y[idx]
-
- if np.random.uniform() < mixup_rate and i < len(perm) - 1:
- lam = np.random.beta(mixup_alpha, mixup_alpha)
- X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]]
- y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]]
-
- return X, y
-
-
-def make_padding(width, cropsize, offset):
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
-def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset):
- len_dataset = patches * len(filelist)
-
- X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
- y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches)
- ends = starts + cropsize
- for j in range(patches):
- idx = i * patches + j
- X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]]
- y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]]
-
- return X_dataset, y_dataset
-
-
-def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset):
- patch_list = []
- patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format(
- cropsize, sr, hop_length, n_fft, offset
- )
- os.makedirs(patch_dir, exist_ok=True)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- basename = os.path.splitext(os.path.basename(X_path))[0]
-
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- len_dataset = int(np.ceil(X.shape[2] / roi_size))
- for j in range(len_dataset):
- outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j))
- start = j * roi_size
- if not os.path.exists(outpath):
- np.savez(
- outpath,
- X=X_pad[:, :, start : start + cropsize],
- y=y_pad[:, :, start : start + cropsize],
- )
- patch_list.append(outpath)
-
- return VocalRemoverValidationSet(patch_list)
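
As a concrete check of the padding arithmetic in `make_padding` above (numbers chosen purely for illustration):

```python
# a 1000-frame spectrogram, 256-frame crops, 64-frame model offset
left, right, roi_size = make_padding(width=1000, cropsize=256, offset=64)
# left = 64
# roi_size = 256 - 2 * 64 = 128
# right = 128 - (1000 % 128) + 64 = 128 - 104 + 64 = 88
print(left, right, roi_size)  # 64 88 128
```
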
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py
deleted file mode 100644
index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
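
A quick smoke-test sketch for `CascadedASPPNet` above, assuming the sibling `layers_123821KB` module is importable; the spectrogram is random, so the output is meaningless beyond checking shapes:

```python
import torch

model = CascadedASPPNet(n_fft=2048)
model.eval()

# dummy stereo magnitude spectrogram: [batch, channels, n_fft // 2 + 1, frames]
x_mag = torch.rand(1, 2, 1025, 512)
with torch.no_grad():
    masked = model.predict(x_mag)
# predict trims self.offset frames from both ends of the time axis
print(masked.shape)
```
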
diff --git a/spaces/ServerX/PorcoDiaz/utils/i18n.py b/spaces/ServerX/PorcoDiaz/utils/i18n.py
deleted file mode 100644
index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/utils/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = "es_ES"
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "es_ES"
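-        # note: the unconditional assignment below overrides the checks above and always forces "es_ES"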
- language = "es_ES"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- # print("Use Language:", self.language)
- print("")
diff --git a/spaces/SmileyTatsu/Smile/greeting.md b/spaces/SmileyTatsu/Smile/greeting.md
deleted file mode 100644
index ab93c62e2dd733bf088b9dedcb847c7523d66480..0000000000000000000000000000000000000000
--- a/spaces/SmileyTatsu/Smile/greeting.md
+++ /dev/null
@@ -1,4 +0,0 @@
-
-Too busy rn, opening again when I have more free time
-
-Smile! https://rentry.org/SmileyTatsu <- My cute bots
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magic.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magic.py
deleted file mode 100644
index 4f9e4e548f734a06e45928b925f804993f94251f..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magic.py
+++ /dev/null
@@ -1,757 +0,0 @@
-# encoding: utf-8
-"""Magic functions for InteractiveShell.
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (C) 2001 Janko Hauser and
-# Copyright (C) 2001 Fernando Perez
-# Copyright (C) 2008 The IPython Development Team
-
-# Distributed under the terms of the BSD License. The full license is in
-# the file COPYING, distributed as part of this software.
-#-----------------------------------------------------------------------------
-
-import os
-import re
-import sys
-from getopt import getopt, GetoptError
-
-from traitlets.config.configurable import Configurable
-from . import oinspect
-from .error import UsageError
-from .inputtransformer2 import ESC_MAGIC, ESC_MAGIC2
-from ..utils.ipstruct import Struct
-from ..utils.process import arg_split
-from ..utils.text import dedent
-from traitlets import Bool, Dict, Instance, observe
-from logging import error
-
-#-----------------------------------------------------------------------------
-# Globals
-#-----------------------------------------------------------------------------
-
-# A dict we'll use for each class that has magics, used as temporary storage to
-# pass information between the @line/cell_magic method decorators and the
-# @magics_class class decorator, because the method decorators have no
-# access to the class when they run. See for more details:
-# http://stackoverflow.com/questions/2366713/can-a-python-decorator-of-an-instance-method-access-the-class
-
-magics = dict(line={}, cell={})
-
-magic_kinds = ('line', 'cell')
-magic_spec = ('line', 'cell', 'line_cell')
-magic_escapes = dict(line=ESC_MAGIC, cell=ESC_MAGIC2)
-
-#-----------------------------------------------------------------------------
-# Utility classes and functions
-#-----------------------------------------------------------------------------
-
-class Bunch: pass
-
-
-def on_off(tag):
- """Return an ON/OFF string for a 1/0 input. Simple utility function."""
- return ['OFF','ON'][tag]
-
-
-def compress_dhist(dh):
- """Compress a directory history into a new one with at most 20 entries.
-
- Return a new list made from the first and last 10 elements of dhist after
- removal of duplicates.
- """
- head, tail = dh[:-10], dh[-10:]
-
- newhead = []
- done = set()
- for h in head:
- if h in done:
- continue
- newhead.append(h)
- done.add(h)
-
- return newhead + tail
-
-
-def needs_local_scope(func):
- """Decorator to mark magic functions which need to local scope to run."""
- func.needs_local_scope = True
- return func
-
-#-----------------------------------------------------------------------------
-# Class and method decorators for registering magics
-#-----------------------------------------------------------------------------
-
-def magics_class(cls):
- """Class decorator for all subclasses of the main Magics class.
-
- Any class that subclasses Magics *must* also apply this decorator, to
- ensure that all the methods that have been decorated as line/cell magics
- get correctly registered in the class instance. This is necessary because
- when method decorators run, the class does not exist yet, so they
- temporarily store their information into a module global. Application of
- this class decorator copies that global data to the class instance and
- clears the global.
-
- Obviously, this mechanism is not thread-safe, which means that the
- *creation* of subclasses of Magic should only be done in a single-thread
- context. Instantiation of the classes has no restrictions. Given that
- these classes are typically created at IPython startup time and before user
- application code becomes active, in practice this should not pose any
- problems.
- """
- cls.registered = True
- cls.magics = dict(line = magics['line'],
- cell = magics['cell'])
- magics['line'] = {}
- magics['cell'] = {}
- return cls
-
-
-def record_magic(dct, magic_kind, magic_name, func):
- """Utility function to store a function as a magic of a specific kind.
-
- Parameters
- ----------
- dct : dict
- A dictionary with 'line' and 'cell' subdicts.
- magic_kind : str
- Kind of magic to be stored.
- magic_name : str
- Key to store the magic as.
- func : function
- Callable object to store.
- """
- if magic_kind == 'line_cell':
- dct['line'][magic_name] = dct['cell'][magic_name] = func
- else:
- dct[magic_kind][magic_name] = func
-
-
-def validate_type(magic_kind):
- """Ensure that the given magic_kind is valid.
-
- Check that the given magic_kind is one of the accepted spec types (stored
- in the global `magic_spec`), raise ValueError otherwise.
- """
- if magic_kind not in magic_spec:
- raise ValueError('magic_kind must be one of %s, %s given' %
- (magic_kinds, magic_kind))
-
-
-# The docstrings for the decorator below will be fairly similar for the two
-# types (method and function), so we generate them here once and reuse the
-# templates below.
-_docstring_template = \
-"""Decorate the given {0} as {1} magic.
-
-The decorator can be used with or without arguments, as follows.
-
-i) without arguments: it will create a {1} magic named as the {0} being
-decorated::
-
- @deco
- def foo(...)
-
-will create a {1} magic named `foo`.
-
-ii) with one string argument: which will be used as the actual name of the
-resulting magic::
-
- @deco('bar')
- def foo(...)
-
-will create a {1} magic named `bar`.
-
- To register a class magic use ``InteractiveShell.register_magics(class or instance)``.
-"""
-
-# These two are decorator factories. While they are conceptually very similar,
-# there are enough differences in the details that it's simpler to have them
-# written as completely standalone functions rather than trying to share code
-# and make a single one with convoluted logic.
-
-def _method_magic_marker(magic_kind):
- """Decorator factory for methods in Magics subclasses.
- """
-
- validate_type(magic_kind)
-
- # This is a closure to capture the magic_kind. We could also use a class,
- # but it's overkill for just that one bit of state.
- def magic_deco(arg):
- if callable(arg):
- # "Naked" decorator call (just @foo, no args)
- func = arg
- name = func.__name__
- retval = arg
- record_magic(magics, magic_kind, name, name)
- elif isinstance(arg, str):
- # Decorator called with arguments (@foo('bar'))
- name = arg
- def mark(func, *a, **kw):
- record_magic(magics, magic_kind, name, func.__name__)
- return func
- retval = mark
- else:
- raise TypeError("Decorator can only be called with "
- "string or function")
- return retval
-
- # Ensure the resulting decorator has a usable docstring
- magic_deco.__doc__ = _docstring_template.format('method', magic_kind)
- return magic_deco
-
-
-def _function_magic_marker(magic_kind):
- """Decorator factory for standalone functions.
- """
- validate_type(magic_kind)
-
- # This is a closure to capture the magic_kind. We could also use a class,
- # but it's overkill for just that one bit of state.
- def magic_deco(arg):
- # Find get_ipython() in the caller's namespace
- caller = sys._getframe(1)
- for ns in ['f_locals', 'f_globals', 'f_builtins']:
- get_ipython = getattr(caller, ns).get('get_ipython')
- if get_ipython is not None:
- break
- else:
- raise NameError('Decorator can only run in context where '
- '`get_ipython` exists')
-
- ip = get_ipython()
-
- if callable(arg):
- # "Naked" decorator call (just @foo, no args)
- func = arg
- name = func.__name__
- ip.register_magic_function(func, magic_kind, name)
- retval = arg
- elif isinstance(arg, str):
- # Decorator called with arguments (@foo('bar'))
- name = arg
- def mark(func, *a, **kw):
- ip.register_magic_function(func, magic_kind, name)
- return func
- retval = mark
- else:
- raise TypeError("Decorator can only be called with "
- "string or function")
- return retval
-
- # Ensure the resulting decorator has a usable docstring
- ds = _docstring_template.format('function', magic_kind)
-
- ds += dedent("""
- Note: this decorator can only be used in a context where IPython is already
- active, so that the `get_ipython()` call succeeds. You can therefore use
- it in your startup files loaded after IPython initializes, but *not* in the
- IPython configuration file itself, which is executed before IPython is
- fully up and running. Any file located in the `startup` subdirectory of
- your configuration profile will be OK in this sense.
- """)
-
- magic_deco.__doc__ = ds
- return magic_deco
-
-
-MAGIC_NO_VAR_EXPAND_ATTR = "_ipython_magic_no_var_expand"
-MAGIC_OUTPUT_CAN_BE_SILENCED = "_ipython_magic_output_can_be_silenced"
-
-
-def no_var_expand(magic_func):
- """Mark a magic function as not needing variable expansion
-
- By default, IPython interprets `{a}` or `$a` in the line passed to magics
- as variables that should be interpolated from the interactive namespace
- before passing the line to the magic function.
- This is not always desirable, e.g. when the magic executes Python code
- (%timeit, %time, etc.).
- Decorate magics with `@no_var_expand` to opt-out of variable expansion.
-
- .. versionadded:: 7.3
- """
- setattr(magic_func, MAGIC_NO_VAR_EXPAND_ATTR, True)
- return magic_func
-
-
-def output_can_be_silenced(magic_func):
- """Mark a magic function so its output may be silenced.
-
- The output is silenced if the Python code used as a parameter of
- the magic ends in a semicolon, not counting a Python comment that can
- follow it.
- """
- setattr(magic_func, MAGIC_OUTPUT_CAN_BE_SILENCED, True)
- return magic_func
-
-# Create the actual decorators for public use
-
-# These three are used to decorate methods in class definitions
-line_magic = _method_magic_marker('line')
-cell_magic = _method_magic_marker('cell')
-line_cell_magic = _method_magic_marker('line_cell')
-
-# These three decorate standalone functions and perform the decoration
-# immediately. They can only run where get_ipython() works
-register_line_magic = _function_magic_marker('line')
-register_cell_magic = _function_magic_marker('cell')
-register_line_cell_magic = _function_magic_marker('line_cell')
-
-#-----------------------------------------------------------------------------
-# Core Magic classes
-#-----------------------------------------------------------------------------
-
-class MagicsManager(Configurable):
- """Object that handles all magic-related functionality for IPython.
- """
- # Non-configurable class attributes
-
- # A two-level dict, first keyed by magic type, then by magic function, and
- # holding the actual callable object as value. This is the dict used for
- # magic function dispatch
- magics = Dict()
- lazy_magics = Dict(
- help="""
- Mapping from magic names to modules to load.
-
- This can be used in IPython/IPykernel configuration to declare lazy magics
- that will only be imported/registered on first use.
-
- For example::
-
- c.MagicsManager.lazy_magics = {
- "my_magic": "slow.to.import",
- "my_other_magic": "also.slow",
- }
-
- On first invocation of `%my_magic`, `%%my_magic`, `%my_other_magic` or
- `%%my_other_magic`, the corresponding module will be loaded as an ipython
- extension as if you had previously done `%load_ext ipython`.
-
- Magic names should be given without the percent sign(s), as magics can be
- both cell and line magics.
-
- Lazy loading happens relatively late in the execution process, and
- complex extensions that manipulate Python/IPython internal state or global state
- might not support lazy loading.
- """
- ).tag(
- config=True,
- )
-
- # A registry of the original objects that we've been given holding magics.
- registry = Dict()
-
- shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', allow_none=True)
-
- auto_magic = Bool(True, help=
- "Automatically call line magics without requiring explicit % prefix"
- ).tag(config=True)
- @observe('auto_magic')
- def _auto_magic_changed(self, change):
- self.shell.automagic = change['new']
-
- _auto_status = [
- 'Automagic is OFF, % prefix IS needed for line magics.',
- 'Automagic is ON, % prefix IS NOT needed for line magics.']
-
- user_magics = Instance('IPython.core.magics.UserMagics', allow_none=True)
-
- def __init__(self, shell=None, config=None, user_magics=None, **traits):
-
- super(MagicsManager, self).__init__(shell=shell, config=config,
- user_magics=user_magics, **traits)
- self.magics = dict(line={}, cell={})
- # Let's add the user_magics to the registry for uniformity, so *all*
- # registered magic containers can be found there.
- self.registry[user_magics.__class__.__name__] = user_magics
-
- def auto_status(self):
- """Return descriptive string with automagic status."""
- return self._auto_status[self.auto_magic]
-
- def lsmagic(self):
- """Return a dict of currently available magic functions.
-
- The return dict has the keys 'line' and 'cell', corresponding to the
- two types of magics we support. Each value is a list of names.
- """
- return self.magics
-
- def lsmagic_docs(self, brief=False, missing=''):
- """Return dict of documentation of magic functions.
-
- The return dict has the keys 'line' and 'cell', corresponding to the
- two types of magics we support. Each value is a dict keyed by magic
- name whose value is the function docstring. If a docstring is
- unavailable, the value of `missing` is used instead.
-
- If brief is True, only the first line of each docstring will be returned.
- """
- docs = {}
- for m_type in self.magics:
- m_docs = {}
- for m_name, m_func in self.magics[m_type].items():
- if m_func.__doc__:
- if brief:
- m_docs[m_name] = m_func.__doc__.split('\n', 1)[0]
- else:
- m_docs[m_name] = m_func.__doc__.rstrip()
- else:
- m_docs[m_name] = missing
- docs[m_type] = m_docs
- return docs
-
- def register_lazy(self, name: str, fully_qualified_name: str):
- """
- Lazily register a magic via an extension.
-
-
- Parameters
- ----------
- name : str
- Name of the magic you wish to register.
- fully_qualified_name :
- Fully qualified name of the module/submodule that should be loaded
- as an extensions when the magic is first called.
- It is assumed that loading this extension will register the given
- magic.
- """
-
- self.lazy_magics[name] = fully_qualified_name
-
- def register(self, *magic_objects):
- """Register one or more instances of Magics.
-
- Take one or more classes or instances of classes that subclass the main
- `core.Magic` class, and register them with IPython to use the magic
- functions they provide. The registration process will then ensure that
- any methods that have been decorated to provide line and/or cell magics will
- be recognized with the `%x`/`%%x` syntax as a line/cell magic
- respectively.
-
- If classes are given, they will be instantiated with the default
- constructor. If your classes need a custom constructor, you should
- instantiate them first and pass the instance.
-
- The provided arguments can be an arbitrary mix of classes and instances.
-
- Parameters
- ----------
- *magic_objects : one or more classes or instances
- """
- # Start by validating them to ensure they have all had their magic
- # methods registered at the instance level
- for m in magic_objects:
- if not m.registered:
- raise ValueError("Class of magics %r was constructed without "
- "the @register_magics class decorator")
- if isinstance(m, type):
- # If we're given an uninstantiated class
- m = m(shell=self.shell)
-
- # Now that we have an instance, we can register it and update the
- # table of callables
- self.registry[m.__class__.__name__] = m
- for mtype in magic_kinds:
- self.magics[mtype].update(m.magics[mtype])
-
- def register_function(self, func, magic_kind='line', magic_name=None):
- """Expose a standalone function as magic function for IPython.
-
- This will create an IPython magic (line, cell or both) from a
- standalone function. The functions should have the following
- signatures:
-
- * For line magics: `def f(line)`
- * For cell magics: `def f(line, cell)`
- * For a function that does both: `def f(line, cell=None)`
-
- In the latter case, the function will be called with `cell==None` when
- invoked as `%f`, and with cell as a string when invoked as `%%f`.
-
- Parameters
- ----------
- func : callable
- Function to be registered as a magic.
- magic_kind : str
- Kind of magic, one of 'line', 'cell' or 'line_cell'
- magic_name : optional str
- If given, the name the magic will have in the IPython namespace. By
- default, the name of the function itself is used.
- """
-
- # Create the new method in the user_magics and register it in the
- # global table
- validate_type(magic_kind)
- magic_name = func.__name__ if magic_name is None else magic_name
- setattr(self.user_magics, magic_name, func)
- record_magic(self.magics, magic_kind, magic_name, func)
-
- def register_alias(self, alias_name, magic_name, magic_kind='line', magic_params=None):
- """Register an alias to a magic function.
-
- The alias is an instance of :class:`MagicAlias`, which holds the
- name and kind of the magic it should call. Binding is done at
- call time, so if the underlying magic function is changed the alias
- will call the new function.
-
- Parameters
- ----------
- alias_name : str
- The name of the magic to be registered.
- magic_name : str
- The name of an existing magic.
- magic_kind : str
- Kind of magic, one of 'line' or 'cell'
- """
-
- # `validate_type` is too permissive, as it allows 'line_cell'
- # which we do not handle.
- if magic_kind not in magic_kinds:
- raise ValueError('magic_kind must be one of %s, %s given' %
- (magic_kinds, magic_kind))
-
- alias = MagicAlias(self.shell, magic_name, magic_kind, magic_params)
- setattr(self.user_magics, alias_name, alias)
- record_magic(self.magics, magic_kind, alias_name, alias)
-
-# Key base class that provides the central functionality for magics.
-
-
-class Magics(Configurable):
- """Base class for implementing magic functions.
-
- Shell functions which can be reached as %function_name. All magic
- functions should accept a string, which they can parse for their own
- needs. This can make some functions easier to type, eg `%cd ../`
- vs. `%cd("../")`
-
- Classes providing magic functions need to subclass this class, and they
- MUST:
-
- - Use the method decorators `@line_magic` and `@cell_magic` to decorate
- individual methods as magic functions, AND
-
- - Use the class decorator `@magics_class` to ensure that the magic
- methods are properly registered at the instance level upon instance
- initialization.
-
- See :mod:`magic_functions` for examples of actual implementation classes.
- """
- # Dict holding all command-line options for each magic.
- options_table = None
- # Dict for the mapping of magic names to methods, set by class decorator
- magics = None
- # Flag to check that the class decorator was properly applied
- registered = False
- # Instance of IPython shell
- shell = None
-
- def __init__(self, shell=None, **kwargs):
- if not(self.__class__.registered):
- raise ValueError('Magics subclass without registration - '
- 'did you forget to apply @magics_class?')
- if shell is not None:
- if hasattr(shell, 'configurables'):
- shell.configurables.append(self)
- if hasattr(shell, 'config'):
- kwargs.setdefault('parent', shell)
-
- self.shell = shell
- self.options_table = {}
- # The method decorators are run when the instance doesn't exist yet, so
- # they can only record the names of the methods they are supposed to
- # grab. Only now, that the instance exists, can we create the proper
- # mapping to bound methods. So we read the info off the original names
- # table and replace each method name by the actual bound method.
- # But we mustn't clobber the *class* mapping, in case of multiple instances.
- class_magics = self.magics
- self.magics = {}
- for mtype in magic_kinds:
- tab = self.magics[mtype] = {}
- cls_tab = class_magics[mtype]
- for magic_name, meth_name in cls_tab.items():
- if isinstance(meth_name, str):
- # it's a method name, grab it
- tab[magic_name] = getattr(self, meth_name)
- else:
- # it's the real thing
- tab[magic_name] = meth_name
- # Configurable **needs** to be initiated at the end or the config
- # magics get screwed up.
- super(Magics, self).__init__(**kwargs)
-
- def arg_err(self,func):
- """Print docstring if incorrect arguments were passed"""
- print('Error in arguments:')
- print(oinspect.getdoc(func))
-
- def format_latex(self, strng):
- """Format a string for latex inclusion."""
-
- # Characters that need to be escaped for latex:
- escape_re = re.compile(r'(%|_|\$|#|&)',re.MULTILINE)
- # Magic command names as headers:
- cmd_name_re = re.compile(r'^(%s.*?):' % ESC_MAGIC,
- re.MULTILINE)
- # Magic commands
- cmd_re = re.compile(r'(?P<cmd>%s.+?\b)(?!\}\}:)' % ESC_MAGIC,
- re.MULTILINE)
- # Paragraph continue
- par_re = re.compile(r'\\$',re.MULTILINE)
-
- # The "\n" symbol
- newline_re = re.compile(r'\\n')
-
- # Now build the string for output:
- #strng = cmd_name_re.sub(r'\n\\texttt{\\textsl{\\large \1}}:',strng)
- strng = cmd_name_re.sub(r'\n\\bigskip\n\\texttt{\\textbf{ \1}}:',
- strng)
- strng = cmd_re.sub(r'\\texttt{\g<cmd>}',strng)
- strng = par_re.sub(r'\\\\',strng)
- strng = escape_re.sub(r'\\\1',strng)
- strng = newline_re.sub(r'\\textbackslash{}n',strng)
- return strng
-
- def parse_options(self, arg_str, opt_str, *long_opts, **kw):
- """Parse options passed to an argument string.
-
- The interface is similar to that of :func:`getopt.getopt`, but it
- returns a :class:`~IPython.utils.struct.Struct` with the options as keys
- and the stripped argument string still as a string.
-
- arg_str is quoted as a true sys.argv vector by using shlex.split.
- This allows us to easily expand variables, glob files, quote
- arguments, etc.
-
- Parameters
- ----------
- arg_str : str
- The arguments to parse.
- opt_str : str
- The options specification.
- mode : str, default 'string'
- If given as 'list', the argument string is returned as a list (split
- on whitespace) instead of a string.
- list_all : bool, default False
- Put all option values in lists. Normally only options
- appearing more than once are put in a list.
- posix : bool, default True
- Whether to split the input line in POSIX mode or not, as per the
- conventions outlined in the :mod:`shlex` module from the standard
- library.
- """
-
- # inject default options at the beginning of the input line
- caller = sys._getframe(1).f_code.co_name
- arg_str = '%s %s' % (self.options_table.get(caller,''),arg_str)
-
- mode = kw.get('mode','string')
- if mode not in ['string','list']:
- raise ValueError('incorrect mode given: %s' % mode)
- # Get options
- list_all = kw.get('list_all',0)
- posix = kw.get('posix', os.name == 'posix')
- strict = kw.get('strict', True)
-
- preserve_non_opts = kw.get("preserve_non_opts", False)
- remainder_arg_str = arg_str
-
- # Check if we have more than one argument to warrant extra processing:
- odict = {} # Dictionary with options
- args = arg_str.split()
- if len(args) >= 1:
- # If the list of inputs only has 0 or 1 thing in it, there's no
- # need to look for options
- argv = arg_split(arg_str, posix, strict)
- # Do regular option processing
- try:
- opts,args = getopt(argv, opt_str, long_opts)
- except GetoptError as e:
- raise UsageError(
- '%s ( allowed: "%s" %s)' % (e.msg, opt_str, " ".join(long_opts))
- ) from e
- for o, a in opts:
- if mode == "string" and preserve_non_opts:
- # remove option-parts from the original args-string and preserve remaining-part.
- # This relies on the arg_split(...) and getopt(...)'s impl spec, that the parsed options are
- # returned in the original order.
- remainder_arg_str = remainder_arg_str.replace(o, "", 1).replace(
- a, "", 1
- )
- if o.startswith("--"):
- o = o[2:]
- else:
- o = o[1:]
- try:
- odict[o].append(a)
- except AttributeError:
- odict[o] = [odict[o],a]
- except KeyError:
- if list_all:
- odict[o] = [a]
- else:
- odict[o] = a
-
- # Prepare opts,args for return
- opts = Struct(odict)
- if mode == 'string':
- if preserve_non_opts:
- args = remainder_arg_str.lstrip()
- else:
- args = " ".join(args)
-
- return opts,args
-
- def default_option(self, fn, optstr):
- """Make an entry in the options_table for fn, with value optstr"""
-
- if fn not in self.lsmagic():
- error("%s is not a magic function" % fn)
- self.options_table[fn] = optstr
-
-
-class MagicAlias(object):
- """An alias to another magic function.
-
- An alias is determined by its magic name and magic kind. Lookup
- is done at call time, so if the underlying magic changes the alias
- will call the new function.
-
- Use the :meth:`MagicsManager.register_alias` method or the
- `%alias_magic` magic function to create and register a new alias.
- """
- def __init__(self, shell, magic_name, magic_kind, magic_params=None):
- self.shell = shell
- self.magic_name = magic_name
- self.magic_params = magic_params
- self.magic_kind = magic_kind
-
- self.pretty_target = '%s%s' % (magic_escapes[self.magic_kind], self.magic_name)
- self.__doc__ = "Alias for `%s`." % self.pretty_target
-
- self._in_call = False
-
- def __call__(self, *args, **kwargs):
- """Call the magic alias."""
- fn = self.shell.find_magic(self.magic_name, self.magic_kind)
- if fn is None:
- raise UsageError("Magic `%s` not found." % self.pretty_target)
-
- # Protect against infinite recursion.
- if self._in_call:
- raise UsageError("Infinite recursion detected; "
- "magic aliases cannot call themselves.")
- self._in_call = True
- try:
- if self.magic_params:
- args_list = list(args)
- args_list[0] = self.magic_params + " " + args[0]
- args = tuple(args_list)
- return fn(*args, **kwargs)
- finally:
- self._in_call = False
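
The deleted `magic.py` is IPython's core machinery for defining and registering magics. Its docstrings describe the intended public API: decorate methods of a `Magics` subclass with `@line_magic`/`@cell_magic`, apply `@magics_class` to the class, and register it with the running shell; standalone functions go through `register_line_magic` and friends. A minimal sketch of that usage, assuming an active IPython session (the `ShoutMagics` class and its magic names are made up for illustration):

```python
from IPython import get_ipython
from IPython.core.magic import (
    Magics, magics_class, line_magic, cell_magic, register_line_magic,
)

@magics_class
class ShoutMagics(Magics):
    """Hypothetical example: methods become the %shout and %%shout_cell magics."""

    @line_magic
    def shout(self, line):
        # A line magic receives the rest of the line as one string.
        return line.upper()

    @cell_magic
    def shout_cell(self, line, cell):
        # A cell magic receives the first line and the cell body separately.
        return cell.upper()

ip = get_ipython()  # only non-None inside a running IPython/Jupyter session
ip.register_magics(ShoutMagics)

# Standalone functions can also be registered, but only where get_ipython()
# resolves, as the docstring for register_line_magic above points out.
@register_line_magic
def whisper(line):
    return line.lower()
```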
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/osm.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/osm.py
deleted file mode 100644
index f64f1bce6ae022804683f031d6c63db0201fa779..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/osm.py
+++ /dev/null
@@ -1,855 +0,0 @@
-"""Implementation of magic functions for interaction with the OS.
-
-Note: this module is named 'osm' instead of 'os' to avoid a collision with the
-builtin.
-"""
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-import io
-import os
-import pathlib
-import re
-import sys
-from pprint import pformat
-
-from IPython.core import magic_arguments
-from IPython.core import oinspect
-from IPython.core import page
-from IPython.core.alias import AliasError, Alias
-from IPython.core.error import UsageError
-from IPython.core.magic import (
- Magics, compress_dhist, magics_class, line_magic, cell_magic, line_cell_magic
-)
-from IPython.testing.skipdoctest import skip_doctest
-from IPython.utils.openpy import source_to_unicode
-from IPython.utils.process import abbrev_cwd
-from IPython.utils.terminal import set_term_title
-from traitlets import Bool
-from warnings import warn
-
-
-@magics_class
-class OSMagics(Magics):
- """Magics to interact with the underlying OS (shell-type functionality).
- """
-
- cd_force_quiet = Bool(False,
- help="Force %cd magic to be quiet even if -q is not passed."
- ).tag(config=True)
-
- def __init__(self, shell=None, **kwargs):
-
- # Now define isexec in a cross platform manner.
- self.is_posix = False
- self.execre = None
- if os.name == 'posix':
- self.is_posix = True
- else:
- try:
- winext = os.environ['pathext'].replace(';','|').replace('.','')
- except KeyError:
- winext = 'exe|com|bat|py'
- try:
- self.execre = re.compile(r'(.*)\.(%s)$' % winext,re.IGNORECASE)
- except re.error:
- warn("Seems like your pathext environmental "
- "variable is malformed. Please check it to "
- "enable a proper handle of file extensions "
- "managed for your system")
- winext = 'exe|com|bat|py'
- self.execre = re.compile(r'(.*)\.(%s)$' % winext,re.IGNORECASE)
-
- # call up the chain
- super().__init__(shell=shell, **kwargs)
-
-
- def _isexec_POSIX(self, file):
- """
- Test for executable on a POSIX system
- """
- if os.access(file.path, os.X_OK):
- # will fail on macOS if access is not X_OK
- return file.is_file()
- return False
-
-
-
- def _isexec_WIN(self, file):
- """
- Test for executable file on non POSIX system
- """
- return file.is_file() and self.execre.match(file.name) is not None
-
- def isexec(self, file):
- """
- Test for an executable file on either POSIX or Windows systems
- """
- if self.is_posix:
- return self._isexec_POSIX(file)
- else:
- return self._isexec_WIN(file)
-
-
- @skip_doctest
- @line_magic
- def alias(self, parameter_s=''):
- """Define an alias for a system command.
-
- '%alias alias_name cmd' defines 'alias_name' as an alias for 'cmd'
-
- Then, typing 'alias_name params' will execute the system command 'cmd
- params' (from your underlying operating system).
-
- Aliases have lower precedence than magic functions and Python normal
- variables, so if 'foo' is both a Python variable and an alias, the
- alias can not be executed until 'del foo' removes the Python variable.
-
- You can use the %l specifier in an alias definition to represent the
- whole line when the alias is called. For example::
-
- In [2]: alias bracket echo "Input in brackets: <%l>"
- In [3]: bracket hello world
- Input in brackets: <hello world>
-
- You can also define aliases with parameters using %s specifiers (one
- per parameter)::
-
- In [1]: alias parts echo first %s second %s
- In [2]: %parts A B
- first A second B
- In [3]: %parts A
- Incorrect number of arguments: 2 expected.
- parts is an alias to: 'echo first %s second %s'
-
- Note that %l and %s are mutually exclusive. You can only use one or
- the other in your aliases.
-
- Aliases expand Python variables just like system calls using ! or !!
- do: all expressions prefixed with '$' get expanded. For details of
- the semantic rules, see PEP-215:
- https://peps.python.org/pep-0215/. This is the library used by
- IPython for variable expansion. If you want to access a true shell
- variable, an extra $ is necessary to prevent its expansion by
- IPython::
-
- In [6]: alias show echo
- In [7]: PATH='A Python string'
- In [8]: show $PATH
- A Python string
- In [9]: show $$PATH
- /usr/local/lf9560/bin:/usr/local/intel/compiler70/ia32/bin:...
-
- You can use the alias facility to access all of $PATH. See the %rehashx
- function, which automatically creates aliases for the contents of your
- $PATH.
-
- If called with no parameters, %alias prints the current alias table
- for your system. For posix systems, the default aliases are 'cat',
- 'cp', 'mv', 'rm', 'rmdir', and 'mkdir', and other platform-specific
- aliases are added. For windows-based systems, the default aliases are
- 'copy', 'ddir', 'echo', 'ls', 'ldir', 'mkdir', 'ren', and 'rmdir'.
-
- You can see the definition of an alias by adding a question mark at the
- end::
-
- In [1]: cat?
- Repr: <alias cat for 'cat'>"""
-
- par = parameter_s.strip()
- if not par:
- aliases = sorted(self.shell.alias_manager.aliases)
- # stored = self.shell.db.get('stored_aliases', {} )
- # for k, v in stored:
- # atab.append(k, v[0])
-
- print("Total number of aliases:", len(aliases))
- sys.stdout.flush()
- return aliases
-
- # Now try to define a new one
- try:
- alias,cmd = par.split(None, 1)
- except ValueError:
- print(oinspect.getdoc(self.alias))
- return
-
- try:
- self.shell.alias_manager.define_alias(alias, cmd)
- except AliasError as e:
- print(e)
- # end magic_alias
-
- @line_magic
- def unalias(self, parameter_s=''):
- """Remove an alias"""
-
- aname = parameter_s.strip()
- try:
- self.shell.alias_manager.undefine_alias(aname)
- except ValueError as e:
- print(e)
- return
-
- stored = self.shell.db.get('stored_aliases', {} )
- if aname in stored:
- print("Removing %stored alias",aname)
- del stored[aname]
- self.shell.db['stored_aliases'] = stored
-
- @line_magic
- def rehashx(self, parameter_s=''):
- """Update the alias table with all executable files in $PATH.
-
- rehashx explicitly checks that every entry in $PATH is a file
- with execute access (os.X_OK).
-
- Under Windows, it checks executability as a match against a
- '|'-separated string of extensions, stored in the IPython config
- variable win_exec_ext. This defaults to 'exe|com|bat'.
-
- This function also resets the root module cache of module completer,
- used on slow filesystems.
- """
- from IPython.core.alias import InvalidAliasError
-
- # for the benefit of module completer in ipy_completers.py
- del self.shell.db['rootmodules_cache']
-
- path = [os.path.abspath(os.path.expanduser(p)) for p in
- os.environ.get('PATH','').split(os.pathsep)]
-
- syscmdlist = []
- savedir = os.getcwd()
-
- # Now walk the paths looking for executables to alias.
- try:
- # write the whole loop for posix/Windows so we don't have an if in
- # the innermost part
- if self.is_posix:
- for pdir in path:
- try:
- os.chdir(pdir)
- except OSError:
- continue
-
- # for python 3.6+ rewrite to: with os.scandir(pdir) as dirlist:
- dirlist = os.scandir(path=pdir)
- for ff in dirlist:
- if self.isexec(ff):
- fname = ff.name
- try:
- # Removes dots from the name since ipython
- # will assume names with dots to be python.
- if not self.shell.alias_manager.is_alias(fname):
- self.shell.alias_manager.define_alias(
- fname.replace('.',''), fname)
- except InvalidAliasError:
- pass
- else:
- syscmdlist.append(fname)
- else:
- no_alias = Alias.blacklist
- for pdir in path:
- try:
- os.chdir(pdir)
- except OSError:
- continue
-
- # for python 3.6+ rewrite to: with os.scandir(pdir) as dirlist:
- dirlist = os.scandir(pdir)
- for ff in dirlist:
- fname = ff.name
- base, ext = os.path.splitext(fname)
- if self.isexec(ff) and base.lower() not in no_alias:
- if ext.lower() == '.exe':
- fname = base
- try:
- # Removes dots from the name since ipython
- # will assume names with dots to be python.
- self.shell.alias_manager.define_alias(
- base.lower().replace('.',''), fname)
- except InvalidAliasError:
- pass
- syscmdlist.append(fname)
-
- self.shell.db['syscmdlist'] = syscmdlist
- finally:
- os.chdir(savedir)
-
- @skip_doctest
- @line_magic
- def pwd(self, parameter_s=''):
- """Return the current working directory path.
-
- Examples
- --------
- ::
-
- In [9]: pwd
- Out[9]: '/home/tsuser/sprint/ipython'
- """
- try:
- return os.getcwd()
- except FileNotFoundError as e:
- raise UsageError("CWD no longer exists - please use %cd to change directory.") from e
-
- @skip_doctest
- @line_magic
- def cd(self, parameter_s=''):
- """Change the current working directory.
-
- This command automatically maintains an internal list of directories
- you visit during your IPython session, in the variable ``_dh``. The
- command :magic:`%dhist` shows this history nicely formatted. You can
- also do ``cd -`` to see directory history conveniently.
- Usage:
-
- - ``cd 'dir'``: changes to directory 'dir'.
- - ``cd -``: changes to the last visited directory.
- - ``cd -<n>``: changes to the n-th directory in the directory history.
- - ``cd --foo``: change to directory that matches 'foo' in history
- - ``cd -b <bookmark_name>``: jump to a bookmark set by %bookmark
- - Hitting a tab key after ``cd -b`` allows you to tab-complete
- bookmark names.
-
- .. note::
- ``cd <bookmark_name>`` is enough if there is no directory
- ``<bookmark_name>``, but a bookmark with the name exists.
-
- Options:
-
- -q Be quiet. Do not print the working directory after the
- cd command is executed. By default IPython's cd
- command does print this directory, since the default
- prompts do not display path information.
-
- .. note::
- Note that ``!cd`` doesn't work for this purpose because the shell
- where ``!command`` runs is immediately discarded after executing
- 'command'.
-
- Examples
- --------
- ::
-
- In [10]: cd parent/child
- /home/tsuser/parent/child
- """
-
- try:
- oldcwd = os.getcwd()
- except FileNotFoundError:
- # Happens if the CWD has been deleted.
- oldcwd = None
-
- numcd = re.match(r'(-)(\d+)$',parameter_s)
- # jump in directory history by number
- if numcd:
- nn = int(numcd.group(2))
- try:
- ps = self.shell.user_ns['_dh'][nn]
- except IndexError:
- print('The requested directory does not exist in history.')
- return
- else:
- opts = {}
- elif parameter_s.startswith('--'):
- ps = None
- fallback = None
- pat = parameter_s[2:]
- dh = self.shell.user_ns['_dh']
- # first search only by basename (last component)
- for ent in reversed(dh):
- if pat in os.path.basename(ent) and os.path.isdir(ent):
- ps = ent
- break
-
- if fallback is None and pat in ent and os.path.isdir(ent):
- fallback = ent
-
- # if we have no last part match, pick the first full path match
- if ps is None:
- ps = fallback
-
- if ps is None:
- print("No matching entry in directory history")
- return
- else:
- opts = {}
-
-
- else:
- opts, ps = self.parse_options(parameter_s, 'qb', mode='string')
- # jump to previous
- if ps == '-':
- try:
- ps = self.shell.user_ns['_dh'][-2]
- except IndexError as e:
- raise UsageError('%cd -: No previous directory to change to.') from e
- # jump to bookmark if needed
- else:
- if not os.path.isdir(ps) or 'b' in opts:
- bkms = self.shell.db.get('bookmarks', {})
-
- if ps in bkms:
- target = bkms[ps]
- print('(bookmark:%s) -> %s' % (ps, target))
- ps = target
- else:
- if 'b' in opts:
- raise UsageError("Bookmark '%s' not found. "
- "Use '%%bookmark -l' to see your bookmarks." % ps)
-
- # at this point ps should point to the target dir
- if ps:
- try:
- os.chdir(os.path.expanduser(ps))
- if hasattr(self.shell, 'term_title') and self.shell.term_title:
- set_term_title(self.shell.term_title_format.format(cwd=abbrev_cwd()))
- except OSError:
- print(sys.exc_info()[1])
- else:
- cwd = pathlib.Path.cwd()
- dhist = self.shell.user_ns['_dh']
- if oldcwd != cwd:
- dhist.append(cwd)
- self.shell.db['dhist'] = compress_dhist(dhist)[-100:]
-
- else:
- os.chdir(self.shell.home_dir)
- if hasattr(self.shell, 'term_title') and self.shell.term_title:
- set_term_title(self.shell.term_title_format.format(cwd="~"))
- cwd = pathlib.Path.cwd()
- dhist = self.shell.user_ns['_dh']
-
- if oldcwd != cwd:
- dhist.append(cwd)
- self.shell.db['dhist'] = compress_dhist(dhist)[-100:]
- if not 'q' in opts and not self.cd_force_quiet and self.shell.user_ns['_dh']:
- print(self.shell.user_ns['_dh'][-1])
-
- @line_magic
- def env(self, parameter_s=''):
- """Get, set, or list environment variables.
-
- Usage:\\
-
- :``%env``: lists all environment variables/values
- :``%env var``: get value for var
- :``%env var val``: set value for var
- :``%env var=val``: set value for var
- :``%env var=$val``: set value for var, using python expansion if possible
- """
- if parameter_s.strip():
- split = '=' if '=' in parameter_s else ' '
- bits = parameter_s.split(split)
- if len(bits) == 1:
- key = parameter_s.strip()
- if key in os.environ:
- return os.environ[key]
- else:
- err = "Environment does not have key: {0}".format(key)
- raise UsageError(err)
- if len(bits) > 1:
- return self.set_env(parameter_s)
- env = dict(os.environ)
- # hide likely secrets when printing the whole environment
- for key in list(env):
- if any(s in key.lower() for s in ('key', 'token', 'secret')):
- env[key] = ''
-
- return env
-
- @line_magic
- def set_env(self, parameter_s):
- """Set environment variables. Assumptions are that either "val" is a
- name in the user namespace, or val is something that evaluates to a
- string.
-
- Usage:\\
- :``%set_env var val``: set value for var
- :``%set_env var=val``: set value for var
- :``%set_env var=$val``: set value for var, using python expansion if possible
- """
- split = '=' if '=' in parameter_s else ' '
- bits = parameter_s.split(split, 1)
- if not parameter_s.strip() or len(bits)<2:
- raise UsageError("usage is 'set_env var=val'")
- var = bits[0].strip()
- val = bits[1].strip()
- if re.match(r'.*\s.*', var):
- # an environment variable with whitespace is almost certainly
- # not what the user intended. what's more likely is the wrong
- # split was chosen, ie for "set_env cmd_args A=B", we chose
- # '=' for the split and should have chosen ' '. to get around
- # this, users should just assign directly to os.environ or use
- # standard magic {var} expansion.
- err = "refusing to set env var with whitespace: '{0}'"
- err = err.format(val)
- raise UsageError(err)
- os.environ[var] = val
- print('env: {0}={1}'.format(var,val))
-
- @line_magic
- def pushd(self, parameter_s=''):
- """Place the current dir on stack and change directory.
-
- Usage:\\
- %pushd ['dirname']
- """
-
- dir_s = self.shell.dir_stack
- tgt = os.path.expanduser(parameter_s)
- cwd = os.getcwd().replace(self.shell.home_dir,'~')
- if tgt:
- self.cd(parameter_s)
- dir_s.insert(0,cwd)
- return self.shell.run_line_magic('dirs', '')
-
- @line_magic
- def popd(self, parameter_s=''):
- """Change to directory popped off the top of the stack.
- """
- if not self.shell.dir_stack:
- raise UsageError("%popd on empty stack")
- top = self.shell.dir_stack.pop(0)
- self.cd(top)
- print("popd ->",top)
-
- @line_magic
- def dirs(self, parameter_s=''):
- """Return the current directory stack."""
-
- return self.shell.dir_stack
-
- @line_magic
- def dhist(self, parameter_s=''):
- """Print your history of visited directories.
-
- %dhist -> print full history\\
- %dhist n -> print last n entries only\\
- %dhist n1 n2 -> print entries between n1 and n2 (n2 not included)\\
-
- This history is automatically maintained by the %cd command, and
- always available as the global list variable _dh. You can use %cd -<n>
- to go to directory number <n>.
-
- Note that most of the time, you should view directory history by entering
- cd -<TAB>.
-
- """
-
- dh = self.shell.user_ns['_dh']
- if parameter_s:
- try:
- args = list(map(int, parameter_s.split()))  # list() so len() works below
- except:
- self.arg_err(self.dhist)
- return
- if len(args) == 1:
- ini,fin = max(len(dh)-(args[0]),0),len(dh)
- elif len(args) == 2:
- ini,fin = args
- fin = min(fin, len(dh))
- else:
- self.arg_err(self.dhist)
- return
- else:
- ini,fin = 0,len(dh)
- print('Directory history (kept in _dh)')
- for i in range(ini, fin):
- print("%d: %s" % (i, dh[i]))
-
- @skip_doctest
- @line_magic
- def sc(self, parameter_s=''):
- """Shell capture - run shell command and capture output (DEPRECATED use !).
-
- DEPRECATED. Suboptimal, retained for backwards compatibility.
-
- You should use the form 'var = !command' instead. Example:
-
- "%sc -l myfiles = ls ~" should now be written as
-
- "myfiles = !ls ~"
-
- myfiles.s, myfiles.l and myfiles.n still apply as documented
- below.
-
- --
- %sc [options] varname=command
-
- IPython will run the given command using commands.getoutput(), and
- will then update the user's interactive namespace with a variable
- called varname, containing the value of the call. Your command can
- contain shell wildcards, pipes, etc.
-
- The '=' sign in the syntax is mandatory, and the variable name you
- supply must follow Python's standard conventions for valid names.
-
- (A special format without variable name exists for internal use)
-
- Options:
-
- -l: list output. Split the output on newlines into a list before
- assigning it to the given variable. By default the output is stored
- as a single string.
-
- -v: verbose. Print the contents of the variable.
-
- In most cases you should not need to split as a list, because the
- returned value is a special type of string which can automatically
- provide its contents either as a list (split on newlines) or as a
- space-separated string. These are convenient, respectively, either
- for sequential processing or to be passed to a shell command.
-
- For example::
-
- # Capture into variable a
- In [1]: sc a=ls *py
-
- # a is a string with embedded newlines
- In [2]: a
- Out[2]: 'setup.py\\nwin32_manual_post_install.py'
-
- # which can be seen as a list:
- In [3]: a.l
- Out[3]: ['setup.py', 'win32_manual_post_install.py']
-
- # or as a whitespace-separated string:
- In [4]: a.s
- Out[4]: 'setup.py win32_manual_post_install.py'
-
- # a.s is useful to pass as a single command line:
- In [5]: !wc -l $a.s
- 146 setup.py
- 130 win32_manual_post_install.py
- 276 total
-
- # while the list form is useful to loop over:
- In [6]: for f in a.l:
- ...: !wc -l $f
- ...:
- 146 setup.py
- 130 win32_manual_post_install.py
-
- Similarly, the lists returned by the -l option are also special, in
- the sense that you can equally invoke the .s attribute on them to
- automatically get a whitespace-separated string from their contents::
-
- In [7]: sc -l b=ls *py
-
- In [8]: b
- Out[8]: ['setup.py', 'win32_manual_post_install.py']
-
- In [9]: b.s
- Out[9]: 'setup.py win32_manual_post_install.py'
-
- In summary, both the lists and strings used for output capture have
- the following special attributes::
-
- .l (or .list) : value as list.
- .n (or .nlstr): value as newline-separated string.
- .s (or .spstr): value as space-separated string.
- """
-
- opts,args = self.parse_options(parameter_s, 'lv')
- # Try to get a variable name and command to run
- try:
- # the variable name must be obtained from the parse_options
- # output, which uses shlex.split to strip options out.
- var,_ = args.split('=', 1)
- var = var.strip()
- # But the command has to be extracted from the original input
- # parameter_s, not on what parse_options returns, to avoid the
- # quote stripping which shlex.split performs on it.
- _,cmd = parameter_s.split('=', 1)
- except ValueError:
- var,cmd = '',''
- # If all looks ok, proceed
- split = 'l' in opts
- out = self.shell.getoutput(cmd, split=split)
- if 'v' in opts:
- print('%s ==\n%s' % (var, pformat(out)))
- if var:
- self.shell.user_ns.update({var:out})
- else:
- return out
-
- @line_cell_magic
- def sx(self, line='', cell=None):
- """Shell execute - run shell command and capture output (!! is short-hand).
-
- %sx command
-
- IPython will run the given command using commands.getoutput(), and
- return the result formatted as a list (split on '\\n'). Since the
- output is _returned_, it will be stored in ipython's regular output
- cache Out[N] and in the '_N' automatic variables.
-
- Notes:
-
- 1) If an input line begins with '!!', then %sx is automatically
- invoked. That is, while::
-
- !ls
-
- causes ipython to simply issue system('ls'), typing::
-
- !!ls
-
- is a shorthand equivalent to::
-
- %sx ls
-
- 2) %sx differs from %sc in that %sx automatically splits into a list,
- like '%sc -l'. The reason for this is to make it as easy as possible
- to process line-oriented shell output via further python commands.
- %sc is meant to provide much finer control, but requires more
- typing.
-
- 3) Just like %sc -l, this is a list with special attributes:
- ::
-
- .l (or .list) : value as list.
- .n (or .nlstr): value as newline-separated string.
- .s (or .spstr): value as whitespace-separated string.
-
- This is very useful when trying to use such lists as arguments to
- system commands."""
-
- if cell is None:
- # line magic
- return self.shell.getoutput(line)
- else:
- opts,args = self.parse_options(line, '', 'out=')
- output = self.shell.getoutput(cell)
- out_name = opts.get('out', opts.get('o'))
- if out_name:
- self.shell.user_ns[out_name] = output
- else:
- return output
-
- system = line_cell_magic('system')(sx)
- bang = cell_magic('!')(sx)
-
- @line_magic
- def bookmark(self, parameter_s=''):
- """Manage IPython's bookmark system.
-
- %bookmark <name> - set bookmark to current dir
- %bookmark <name> <dir> - set bookmark to <dir>
- %bookmark -l - list all bookmarks
- %bookmark -d <name> - remove bookmark <name>
- %bookmark -r - remove all bookmarks
-
- You can later on access a bookmarked folder with::
-
- %cd -b <name>
-
- or simply '%cd <name>' if there is no directory called <name> AND
- there is such a bookmark defined.
-
- Your bookmarks persist through IPython sessions, but they are
- associated with each profile."""
-
- opts,args = self.parse_options(parameter_s,'drl',mode='list')
- if len(args) > 2:
- raise UsageError("%bookmark: too many arguments")
-
- bkms = self.shell.db.get('bookmarks',{})
-
- if 'd' in opts:
- try:
- todel = args[0]
- except IndexError as e:
- raise UsageError(
- "%bookmark -d: must provide a bookmark to delete") from e
- else:
- try:
- del bkms[todel]
- except KeyError as e:
- raise UsageError(
- "%%bookmark -d: Can't delete bookmark '%s'" % todel) from e
-
- elif 'r' in opts:
- bkms = {}
- elif 'l' in opts:
- bks = sorted(bkms)
- if bks:
- size = max(map(len, bks))
- else:
- size = 0
- fmt = '%-'+str(size)+'s -> %s'
- print('Current bookmarks:')
- for bk in bks:
- print(fmt % (bk, bkms[bk]))
- else:
- if not args:
- raise UsageError("%bookmark: You must specify the bookmark name")
- elif len(args)==1:
- bkms[args[0]] = os.getcwd()
- elif len(args)==2:
- bkms[args[0]] = args[1]
- self.shell.db['bookmarks'] = bkms
-
- @line_magic
- def pycat(self, parameter_s=''):
- """Show a syntax-highlighted file through a pager.
-
- This magic is similar to the cat utility, but it will assume the file
- to be Python source and will show it with syntax highlighting.
-
- This magic command can take a local filename, a URL,
- a history range (see %history) or a macro as argument.
-
- If no parameter is given, prints out history of current session up to
- this point. ::
-
- %pycat myscript.py
- %pycat 7-27
- %pycat myMacro
- %pycat http://www.example.com/myscript.py
- """
- try:
- cont = self.shell.find_user_code(parameter_s, skip_encoding_cookie=False)
- except (ValueError, IOError):
- print("Error: no such file, variable, URL, history range or macro")
- return
-
- page.page(self.shell.pycolorize(source_to_unicode(cont)))
-
- @magic_arguments.magic_arguments()
- @magic_arguments.argument(
- '-a', '--append', action='store_true', default=False,
- help='Append contents of the cell to an existing file. '
- 'The file will be created if it does not exist.'
- )
- @magic_arguments.argument(
- 'filename', type=str,
- help='file to write'
- )
- @cell_magic
- def writefile(self, line, cell):
- """Write the contents of the cell to a file.
-
- The file will be overwritten unless the -a (--append) flag is specified.
- """
- args = magic_arguments.parse_argstring(self.writefile, line)
- if re.match(r'^(\'.*\')|(".*")$', args.filename):
- filename = os.path.expanduser(args.filename[1:-1])
- else:
- filename = os.path.expanduser(args.filename)
-
- if os.path.exists(filename):
- if args.append:
- print("Appending to %s" % filename)
- else:
- print("Overwriting %s" % filename)
- else:
- print("Writing %s" % filename)
-
- mode = 'a' if args.append else 'w'
- with io.open(filename, mode, encoding='utf-8') as f:
- f.write(cell)
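
The deleted `osm.py` provides the shell-interaction magics documented above (`%alias`, `%cd`, `%env`, `%set_env`, `%bookmark`, `%sx`, `%%writefile`, ...). The same magics can also be driven programmatically through the shell's `run_line_magic`/`run_cell_magic` methods; a short sketch, assuming a running IPython session (the flag name, bookmark, path and file name are placeholders):

```python
from IPython import get_ipython

ip = get_ipython()  # requires a running IPython/Jupyter session

# %set_env / %env: write and read environment variables.
ip.run_line_magic("set_env", "MY_FLAG=1")        # MY_FLAG is a made-up name
print(ip.run_line_magic("env", "MY_FLAG"))       # -> '1'

# %bookmark and %cd -b: name a directory once, jump back to it later.
ip.run_line_magic("bookmark", "scratch /tmp")    # bookmark name and path are placeholders
ip.run_line_magic("cd", "-b scratch")

# %%writefile: dump a cell body to disk (hypothetical file name).
ip.run_cell_magic("writefile", "hello.py", "print('hello')\n")
```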
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/client_exceptions.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/client_exceptions.py
deleted file mode 100644
index c640e1e7fbdf8c56a9e744492d99f8ca32988142..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/client_exceptions.py
+++ /dev/null
@@ -1,342 +0,0 @@
-"""HTTP related errors."""
-
-import asyncio
-import warnings
-from typing import TYPE_CHECKING, Any, Optional, Tuple, Union
-
-from .http_parser import RawResponseMessage
-from .typedefs import LooseHeaders
-
-try:
- import ssl
-
- SSLContext = ssl.SSLContext
-except ImportError: # pragma: no cover
- ssl = SSLContext = None # type: ignore[assignment]
-
-
-if TYPE_CHECKING: # pragma: no cover
- from .client_reqrep import ClientResponse, ConnectionKey, Fingerprint, RequestInfo
-else:
- RequestInfo = ClientResponse = ConnectionKey = None
-
-__all__ = (
- "ClientError",
- "ClientConnectionError",
- "ClientOSError",
- "ClientConnectorError",
- "ClientProxyConnectionError",
- "ClientSSLError",
- "ClientConnectorSSLError",
- "ClientConnectorCertificateError",
- "ServerConnectionError",
- "ServerTimeoutError",
- "ServerDisconnectedError",
- "ServerFingerprintMismatch",
- "ClientResponseError",
- "ClientHttpProxyError",
- "WSServerHandshakeError",
- "ContentTypeError",
- "ClientPayloadError",
- "InvalidURL",
-)
-
-
-class ClientError(Exception):
- """Base class for client connection errors."""
-
-
-class ClientResponseError(ClientError):
- """Connection error during reading response.
-
- request_info: instance of RequestInfo
- """
-
- def __init__(
- self,
- request_info: RequestInfo,
- history: Tuple[ClientResponse, ...],
- *,
- code: Optional[int] = None,
- status: Optional[int] = None,
- message: str = "",
- headers: Optional[LooseHeaders] = None,
- ) -> None:
- self.request_info = request_info
- if code is not None:
- if status is not None:
- raise ValueError(
- "Both code and status arguments are provided; "
- "code is deprecated, use status instead"
- )
- warnings.warn(
- "code argument is deprecated, use status instead",
- DeprecationWarning,
- stacklevel=2,
- )
- if status is not None:
- self.status = status
- elif code is not None:
- self.status = code
- else:
- self.status = 0
- self.message = message
- self.headers = headers
- self.history = history
- self.args = (request_info, history)
-
- def __str__(self) -> str:
- return "{}, message={!r}, url={!r}".format(
- self.status,
- self.message,
- self.request_info.real_url,
- )
-
- def __repr__(self) -> str:
- args = f"{self.request_info!r}, {self.history!r}"
- if self.status != 0:
- args += f", status={self.status!r}"
- if self.message != "":
- args += f", message={self.message!r}"
- if self.headers is not None:
- args += f", headers={self.headers!r}"
- return f"{type(self).__name__}({args})"
-
- @property
- def code(self) -> int:
- warnings.warn(
- "code property is deprecated, use status instead",
- DeprecationWarning,
- stacklevel=2,
- )
- return self.status
-
- @code.setter
- def code(self, value: int) -> None:
- warnings.warn(
- "code property is deprecated, use status instead",
- DeprecationWarning,
- stacklevel=2,
- )
- self.status = value
-
-
-class ContentTypeError(ClientResponseError):
- """ContentType found is not valid."""
-
-
-class WSServerHandshakeError(ClientResponseError):
- """websocket server handshake error."""
-
-
-class ClientHttpProxyError(ClientResponseError):
- """HTTP proxy error.
-
- Raised in :class:`aiohttp.connector.TCPConnector` if
- proxy responds with status other than ``200 OK``
- on ``CONNECT`` request.
- """
-
-
-class TooManyRedirects(ClientResponseError):
- """Client was redirected too many times."""
-
-
-class ClientConnectionError(ClientError):
- """Base class for client socket errors."""
-
-
-class ClientOSError(ClientConnectionError, OSError):
- """OSError error."""
-
-
-class ClientConnectorError(ClientOSError):
- """Client connector error.
-
- Raised in :class:`aiohttp.connector.TCPConnector` if
- a connection can not be established.
- """
-
- def __init__(self, connection_key: ConnectionKey, os_error: OSError) -> None:
- self._conn_key = connection_key
- self._os_error = os_error
- super().__init__(os_error.errno, os_error.strerror)
- self.args = (connection_key, os_error)
-
- @property
- def os_error(self) -> OSError:
- return self._os_error
-
- @property
- def host(self) -> str:
- return self._conn_key.host
-
- @property
- def port(self) -> Optional[int]:
- return self._conn_key.port
-
- @property
- def ssl(self) -> Union[SSLContext, None, bool, "Fingerprint"]:
- return self._conn_key.ssl
-
- def __str__(self) -> str:
- return "Cannot connect to host {0.host}:{0.port} ssl:{1} [{2}]".format(
- self, self.ssl if self.ssl is not None else "default", self.strerror
- )
-
- # OSError.__reduce__ does too much black magick
- __reduce__ = BaseException.__reduce__
-
-
-class ClientProxyConnectionError(ClientConnectorError):
- """Proxy connection error.
-
- Raised in :class:`aiohttp.connector.TCPConnector` if
- connection to proxy can not be established.
- """
-
-
-class UnixClientConnectorError(ClientConnectorError):
- """Unix connector error.
-
- Raised in :py:class:`aiohttp.connector.UnixConnector`
- if connection to unix socket can not be established.
- """
-
- def __init__(
- self, path: str, connection_key: ConnectionKey, os_error: OSError
- ) -> None:
- self._path = path
- super().__init__(connection_key, os_error)
-
- @property
- def path(self) -> str:
- return self._path
-
- def __str__(self) -> str:
- return "Cannot connect to unix socket {0.path} ssl:{1} [{2}]".format(
- self, self.ssl if self.ssl is not None else "default", self.strerror
- )
-
-
-class ServerConnectionError(ClientConnectionError):
- """Server connection errors."""
-
-
-class ServerDisconnectedError(ServerConnectionError):
- """Server disconnected."""
-
- def __init__(self, message: Union[RawResponseMessage, str, None] = None) -> None:
- if message is None:
- message = "Server disconnected"
-
- self.args = (message,)
- self.message = message
-
-
-class ServerTimeoutError(ServerConnectionError, asyncio.TimeoutError):
- """Server timeout error."""
-
-
-class ServerFingerprintMismatch(ServerConnectionError):
- """SSL certificate does not match expected fingerprint."""
-
- def __init__(self, expected: bytes, got: bytes, host: str, port: int) -> None:
- self.expected = expected
- self.got = got
- self.host = host
- self.port = port
- self.args = (expected, got, host, port)
-
- def __repr__(self) -> str:
- return "<{} expected={!r} got={!r} host={!r} port={!r}>".format(
- self.__class__.__name__, self.expected, self.got, self.host, self.port
- )
-
-
-class ClientPayloadError(ClientError):
- """Response payload error."""
-
-
-class InvalidURL(ClientError, ValueError):
- """Invalid URL.
-
- URL used for fetching is malformed, e.g. it doesn't contain a host
- part.
- """
-
- # Derive from ValueError for backward compatibility
-
- def __init__(self, url: Any) -> None:
- # The type of url is not yarl.URL because the exception can be raised
- # on URL(url) call
- super().__init__(url)
-
- @property
- def url(self) -> Any:
- return self.args[0]
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} {self.url}>"
-
-
-class ClientSSLError(ClientConnectorError):
- """Base error for ssl.*Errors."""
-
-
-if ssl is not None:
- cert_errors = (ssl.CertificateError,)
- cert_errors_bases = (
- ClientSSLError,
- ssl.CertificateError,
- )
-
- ssl_errors = (ssl.SSLError,)
- ssl_error_bases = (ClientSSLError, ssl.SSLError)
-else: # pragma: no cover
- cert_errors = tuple()
- cert_errors_bases = (
- ClientSSLError,
- ValueError,
- )
-
- ssl_errors = tuple()
- ssl_error_bases = (ClientSSLError,)
-
-
-class ClientConnectorSSLError(*ssl_error_bases): # type: ignore[misc]
- """Response ssl error."""
-
-
-class ClientConnectorCertificateError(*cert_errors_bases): # type: ignore[misc]
- """Response certificate error."""
-
- def __init__(
- self, connection_key: ConnectionKey, certificate_error: Exception
- ) -> None:
- self._conn_key = connection_key
- self._certificate_error = certificate_error
- self.args = (connection_key, certificate_error)
-
- @property
- def certificate_error(self) -> Exception:
- return self._certificate_error
-
- @property
- def host(self) -> str:
- return self._conn_key.host
-
- @property
- def port(self) -> Optional[int]:
- return self._conn_key.port
-
- @property
- def ssl(self) -> bool:
- return self._conn_key.is_ssl
-
- def __str__(self) -> str:
- return (
- "Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} "
- "[{0.certificate_error.__class__.__name__}: "
- "{0.certificate_error.args}]".format(self)
- )
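
The deleted `client_exceptions.py` defines aiohttp's client-side exception hierarchy: everything derives from `ClientError`, with `ClientConnectionError` subclasses covering transport-level failures and `ClientResponseError` subclasses covering HTTP-level ones. Typical calling code catches them roughly as below; a minimal sketch, assuming aiohttp is installed and using a placeholder URL:

```python
import asyncio
import aiohttp

async def fetch(url: str) -> str:
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(url) as resp:
                # Raises ClientResponseError for 4xx/5xx statuses.
                resp.raise_for_status()
                return await resp.text()
        except aiohttp.ClientResponseError as e:
            print(f"bad status {e.status} for {e.request_info.real_url}")
        except aiohttp.ClientConnectorError as e:
            # DNS failure, refused connection, TLS trouble, ...
            print(f"could not connect to {e.host}:{e.port}: {e.os_error}")
        except aiohttp.ClientError as e:
            # Catch-all for the rest of the hierarchy defined above.
            print(f"client error: {e!r}")
        return ""

asyncio.run(fetch("https://example.com/"))
```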
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/save.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/save.py
deleted file mode 100644
index 90d36f14bc5ebf5cb1e07cb469191ed21e4b3f4b..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/save.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import json
-import pathlib
-import warnings
-
-from .mimebundle import spec_to_mimebundle
-from ..vegalite.v5.data import data_transformers
-
-
-def write_file_or_filename(fp, content, mode="w", encoding=None):
- """Write content to fp, whether fp is a string, a pathlib Path or a
- file-like object"""
- if isinstance(fp, str) or isinstance(fp, pathlib.PurePath):
- with open(file=fp, mode=mode, encoding=encoding) as f:
- f.write(content)
- else:
- fp.write(content)
-
-
-def set_inspect_format_argument(format, fp, inline):
- """Inspect the format argument in the save function"""
- if format is None:
- if isinstance(fp, str):
- format = fp.split(".")[-1]
- elif isinstance(fp, pathlib.PurePath):
- format = fp.suffix.lstrip(".")
- else:
- raise ValueError(
- "must specify file format: "
- "['png', 'svg', 'pdf', 'html', 'json', 'vega']"
- )
-
- if format != "html" and inline:
- warnings.warn("inline argument ignored for non HTML formats.", stacklevel=1)
-
- return format
-
-
-def set_inspect_mode_argument(mode, embed_options, spec, vegalite_version):
- """Inspect the mode argument in the save function"""
- if mode is None:
- if "mode" in embed_options:
- mode = embed_options["mode"]
- elif "$schema" in spec:
- mode = spec["$schema"].split("/")[-2]
- else:
- mode = "vega-lite"
-
- if mode != "vega-lite":
- raise ValueError("mode must be 'vega-lite', " "not '{}'".format(mode))
-
- if mode == "vega-lite" and vegalite_version is None:
- raise ValueError("must specify vega-lite version")
-
- return mode
-
-
-def save(
- chart,
- fp,
- vega_version,
- vegaembed_version,
- format=None,
- mode=None,
- vegalite_version=None,
- embed_options=None,
- json_kwds=None,
- webdriver=None,
- scale_factor=1,
- engine=None,
- inline=False,
- **kwargs,
-):
- """Save a chart to file in a variety of formats
-
- Supported formats are [json, html, png, svg, pdf]
-
- Parameters
- ----------
- chart : alt.Chart
- the chart instance to save
- fp : string filename, pathlib.Path or file-like object
- file to which to write the chart.
- format : string (optional)
- the format to write: one of ['json', 'html', 'png', 'svg', 'pdf'].
- If not specified, the format will be determined from the filename.
- mode : string (optional)
- Must be 'vega-lite'. If not specified, then infer the mode from
- the '$schema' property of the spec, or the ``opt`` dictionary.
- If it's not specified in either of those places, then use 'vega-lite'.
- vega_version : string (optional)
- For html output, the version of vega.js to use
- vegalite_version : string (optional)
- For html output, the version of vegalite.js to use
- vegaembed_version : string (optional)
- For html output, the version of vegaembed.js to use
- embed_options : dict (optional)
- The vegaEmbed options dictionary. Default is {}
- (See https://github.com/vega/vega-embed for details)
- json_kwds : dict (optional)
- Additional keyword arguments are passed to the output method
- associated with the specified format.
- webdriver : string {'chrome' | 'firefox'} (optional)
- Webdriver to use for png or svg output
- scale_factor : float (optional)
- scale_factor to use to change size/resolution of png or svg output
- engine: string {'vl-convert', 'altair_saver'}
- the conversion engine to use for 'png', 'svg', and 'pdf' formats
- inline: bool (optional)
- If False (default), the required JavaScript libraries are loaded
- from a CDN location in the resulting html file.
- If True, the required JavaScript libraries are inlined into the resulting
- html file so that it will work without an internet connection.
- The altair_viewer package is required if True.
- **kwargs :
- additional kwargs passed to spec_to_mimebundle.
- """
- if json_kwds is None:
- json_kwds = {}
-
- if embed_options is None:
- embed_options = {}
-
- format = set_inspect_format_argument(format, fp, inline)
-
- # Temporarily turn off any data transformers so that all data is inlined
- # when calling chart.to_dict. This is relevant for vl-convert which cannot access
- # local json files which could be created by a json data transformer. Furthermore,
- # we don't exit the with statement until this function completed due to the issue
- # described at https://github.com/vega/vl-convert/issues/31
- with data_transformers.enable("default"), data_transformers.disable_max_rows():
- spec = chart.to_dict()
-
- mode = set_inspect_mode_argument(mode, embed_options, spec, vegalite_version)
-
- if format == "json":
- json_spec = json.dumps(spec, **json_kwds)
- write_file_or_filename(fp, json_spec, mode="w")
- elif format == "html":
- if inline:
- kwargs["template"] = "inline"
- mimebundle = spec_to_mimebundle(
- spec=spec,
- format=format,
- mode=mode,
- vega_version=vega_version,
- vegalite_version=vegalite_version,
- vegaembed_version=vegaembed_version,
- embed_options=embed_options,
- json_kwds=json_kwds,
- **kwargs,
- )
- write_file_or_filename(fp, mimebundle["text/html"], mode="w")
- elif format in ["png", "svg", "pdf", "vega"]:
- mimebundle = spec_to_mimebundle(
- spec=spec,
- format=format,
- mode=mode,
- vega_version=vega_version,
- vegalite_version=vegalite_version,
- vegaembed_version=vegaembed_version,
- webdriver=webdriver,
- scale_factor=scale_factor,
- engine=engine,
- **kwargs,
- )
- if format == "png":
- write_file_or_filename(fp, mimebundle["image/png"], mode="wb")
- elif format == "pdf":
- write_file_or_filename(fp, mimebundle["application/pdf"], mode="wb")
- else:
- encoding = kwargs.get("encoding", "utf-8")
- write_file_or_filename(
- fp, mimebundle["image/svg+xml"], mode="w", encoding=encoding
- )
- else:
- raise ValueError("Unsupported format: '{}'".format(format))
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/pytest_plugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/pytest_plugin.py
deleted file mode 100644
index 044ce6914dd70a200cbc90cbbb9abc9135a66340..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/pytest_plugin.py
+++ /dev/null
@@ -1,142 +0,0 @@
-from __future__ import annotations
-
-from contextlib import contextmanager
-from inspect import isasyncgenfunction, iscoroutinefunction
-from typing import Any, Dict, Generator, Tuple, cast
-
-import pytest
-import sniffio
-
-from ._core._eventloop import get_all_backends, get_asynclib
-from .abc import TestRunner
-
-_current_runner: TestRunner | None = None
-
-
-def extract_backend_and_options(backend: object) -> tuple[str, dict[str, Any]]:
- if isinstance(backend, str):
- return backend, {}
- elif isinstance(backend, tuple) and len(backend) == 2:
- if isinstance(backend[0], str) and isinstance(backend[1], dict):
- return cast(Tuple[str, Dict[str, Any]], backend)
-
- raise TypeError("anyio_backend must be either a string or tuple of (string, dict)")
-
-
-@contextmanager
-def get_runner(
- backend_name: str, backend_options: dict[str, Any]
-) -> Generator[TestRunner, object, None]:
- global _current_runner
- if _current_runner:
- yield _current_runner
- return
-
- asynclib = get_asynclib(backend_name)
- token = None
- if sniffio.current_async_library_cvar.get(None) is None:
- # Since we're in control of the event loop, we can cache the name of the async library
- token = sniffio.current_async_library_cvar.set(backend_name)
-
- try:
- backend_options = backend_options or {}
- with asynclib.TestRunner(**backend_options) as runner:
- _current_runner = runner
- yield runner
- finally:
- _current_runner = None
- if token:
- sniffio.current_async_library_cvar.reset(token)
-
-
-def pytest_configure(config: Any) -> None:
- config.addinivalue_line(
- "markers",
- "anyio: mark the (coroutine function) test to be run "
- "asynchronously via anyio.",
- )
-
-
-def pytest_fixture_setup(fixturedef: Any, request: Any) -> None:
- def wrapper(*args, anyio_backend, **kwargs): # type: ignore[no-untyped-def]
- backend_name, backend_options = extract_backend_and_options(anyio_backend)
- if has_backend_arg:
- kwargs["anyio_backend"] = anyio_backend
-
- with get_runner(backend_name, backend_options) as runner:
- if isasyncgenfunction(func):
- yield from runner.run_asyncgen_fixture(func, kwargs)
- else:
- yield runner.run_fixture(func, kwargs)
-
- # Only apply this to coroutine functions and async generator functions in requests that involve
- # the anyio_backend fixture
- func = fixturedef.func
- if isasyncgenfunction(func) or iscoroutinefunction(func):
- if "anyio_backend" in request.fixturenames:
- has_backend_arg = "anyio_backend" in fixturedef.argnames
- fixturedef.func = wrapper
- if not has_backend_arg:
- fixturedef.argnames += ("anyio_backend",)
-
-
-@pytest.hookimpl(tryfirst=True)
-def pytest_pycollect_makeitem(collector: Any, name: Any, obj: Any) -> None:
- if collector.istestfunction(obj, name):
- inner_func = obj.hypothesis.inner_test if hasattr(obj, "hypothesis") else obj
- if iscoroutinefunction(inner_func):
- marker = collector.get_closest_marker("anyio")
- own_markers = getattr(obj, "pytestmark", ())
- if marker or any(marker.name == "anyio" for marker in own_markers):
- pytest.mark.usefixtures("anyio_backend")(obj)
-
-
-@pytest.hookimpl(tryfirst=True)
-def pytest_pyfunc_call(pyfuncitem: Any) -> bool | None:
- def run_with_hypothesis(**kwargs: Any) -> None:
- with get_runner(backend_name, backend_options) as runner:
- runner.run_test(original_func, kwargs)
-
- backend = pyfuncitem.funcargs.get("anyio_backend")
- if backend:
- backend_name, backend_options = extract_backend_and_options(backend)
-
- if hasattr(pyfuncitem.obj, "hypothesis"):
- # Wrap the inner test function unless it's already wrapped
- original_func = pyfuncitem.obj.hypothesis.inner_test
- if original_func.__qualname__ != run_with_hypothesis.__qualname__:
- if iscoroutinefunction(original_func):
- pyfuncitem.obj.hypothesis.inner_test = run_with_hypothesis
-
- return None
-
- if iscoroutinefunction(pyfuncitem.obj):
- funcargs = pyfuncitem.funcargs
- testargs = {arg: funcargs[arg] for arg in pyfuncitem._fixtureinfo.argnames}
- with get_runner(backend_name, backend_options) as runner:
- runner.run_test(pyfuncitem.obj, testargs)
-
- return True
-
- return None
-
-
-@pytest.fixture(params=get_all_backends())
-def anyio_backend(request: Any) -> Any:
- return request.param
-
-
-@pytest.fixture
-def anyio_backend_name(anyio_backend: Any) -> str:
- if isinstance(anyio_backend, str):
- return anyio_backend
- else:
- return anyio_backend[0]
-
-
-@pytest.fixture
-def anyio_backend_options(anyio_backend: Any) -> dict[str, Any]:
- if isinstance(anyio_backend, str):
- return {}
- else:
- return anyio_backend[1]
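-
-
-# Usage sketch (illustrative; the test body is a placeholder): the plugin runs
-# ``anyio``-marked coroutine tests once per backend returned by get_all_backends().
-#
-#   import anyio
-#   import pytest
-#
-#   @pytest.mark.anyio
-#   async def test_sleep(anyio_backend_name):
-#       await anyio.sleep(0)
-#       assert anyio_backend_name in ("asyncio", "trio")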
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backcall/backcall.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backcall/backcall.py
deleted file mode 100644
index fe1fdb5470b04f2e1053b63fa8c365cae0ea1281..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backcall/backcall.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Mon Jan 13 18:17:15 2014
-
-@author: takluyver
-"""
-import sys
-PY3 = (sys.version_info[0] >= 3)
-
-try:
- from inspect import signature, Parameter # Python >= 3.3
-except ImportError:
- from ._signatures import signature, Parameter
-
-if PY3:
- from functools import wraps
-else:
- from functools import wraps as _wraps
- def wraps(f):
- def dec(func):
- _wraps(f)(func)
- func.__wrapped__ = f
- return func
-
- return dec
-
-def callback_prototype(prototype):
- """Decorator to process a callback prototype.
-
- A callback prototype is a function whose signature includes all the values
- that will be passed by the callback API in question.
-
- The original function will be returned, with a ``prototype.adapt`` attribute
- which can be used to prepare third party callbacks.
- """
- protosig = signature(prototype)
- positional, keyword = [], []
- for name, param in protosig.parameters.items():
- if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
- raise TypeError("*args/**kwargs not supported in prototypes")
-
- if (param.default is not Parameter.empty) \
- or (param.kind == Parameter.KEYWORD_ONLY):
- keyword.append(name)
- else:
- positional.append(name)
-
- kwargs = dict.fromkeys(keyword)
- def adapt(callback):
- """Introspect and prepare a third party callback."""
- sig = signature(callback)
- try:
- # XXX: callback can have extra optional parameters - OK?
- sig.bind(*positional, **kwargs)
- return callback
- except TypeError:
- pass
-
- # Match up arguments
- unmatched_pos = positional[:]
- unmatched_kw = kwargs.copy()
- unrecognised = []
- # TODO: unrecognised parameters with default values - OK?
- for name, param in sig.parameters.items():
- # print(name, param.kind) #DBG
- if param.kind == Parameter.POSITIONAL_ONLY:
- if len(unmatched_pos) > 0:
- unmatched_pos.pop(0)
- else:
- unrecognised.append(name)
- elif param.kind == Parameter.POSITIONAL_OR_KEYWORD:
- if (param.default is not Parameter.empty) and (name in unmatched_kw):
- unmatched_kw.pop(name)
- elif len(unmatched_pos) > 0:
- unmatched_pos.pop(0)
- else:
- unrecognised.append(name)
- elif param.kind == Parameter.VAR_POSITIONAL:
- unmatched_pos = []
- elif param.kind == Parameter.KEYWORD_ONLY:
- if name in unmatched_kw:
- unmatched_kw.pop(name)
- else:
- unrecognised.append(name)
- else: # VAR_KEYWORD
- unmatched_kw = {}
-
- # print(unmatched_pos, unmatched_kw, unrecognised) #DBG
-
- if unrecognised:
- raise TypeError("Function {!r} had unmatched arguments: {}".format(callback, unrecognised))
-
- n_positional = len(positional) - len(unmatched_pos)
-
- @wraps(callback)
- def adapted(*args, **kwargs):
- """Wrapper for third party callbacks that discards excess arguments"""
-# print(args, kwargs)
- args = args[:n_positional]
- for name in unmatched_kw:
- # XXX: Could name not be in kwargs?
- kwargs.pop(name)
-# print(args, kwargs, unmatched_pos, cut_positional, unmatched_kw)
- return callback(*args, **kwargs)
-
- return adapted
-
- prototype.adapt = adapt
- return prototype
\ No newline at end of file
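-
-
-# Usage sketch (illustrative; the names are made up): declare the full callback
-# signature once, then adapt simpler third-party callbacks to it. The adapted
-# wrapper silently drops arguments the callback does not accept.
-#
-#   @callback_prototype
-#   def on_message(sender, body, time):
-#       pass
-#
-#   def simple_handler(sender, body):
-#       print(sender, body)
-#
-#   adapted = on_message.adapt(simple_handler)
-#   adapted("alice", "hello", 1389651600)  # the extra ``time`` argument is dropped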
diff --git a/spaces/Suniilkumaar/SwapMukham/face_parsing/parse_mask.py b/spaces/Suniilkumaar/SwapMukham/face_parsing/parse_mask.py
deleted file mode 100644
index 0be62f879ba778d9c048fcb70b782f59012ec34b..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/SwapMukham/face_parsing/parse_mask.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import cv2
-import torch
-import torchvision
-import numpy as np
-import torch.nn as nn
-from PIL import Image
-from tqdm import tqdm
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-
-from . model import BiSeNet
-
-class SoftErosion(nn.Module):
- def __init__(self, kernel_size=15, threshold=0.6, iterations=1):
- super(SoftErosion, self).__init__()
- r = kernel_size // 2
- self.padding = r
- self.iterations = iterations
- self.threshold = threshold
-
- # Create kernel
- y_indices, x_indices = torch.meshgrid(torch.arange(0., kernel_size), torch.arange(0., kernel_size))
- dist = torch.sqrt((x_indices - r) ** 2 + (y_indices - r) ** 2)
- kernel = dist.max() - dist
- kernel /= kernel.sum()
- kernel = kernel.view(1, 1, *kernel.shape)
- self.register_buffer('weight', kernel)
-
- def forward(self, x):
- batch_size = x.size(0) # Get the batch size
- output = []
-
- for i in tqdm(range(batch_size), desc="Soft-Erosion", leave=False):
- input_tensor = x[i:i+1] # Take one input tensor from the batch
- input_tensor = input_tensor.float() # Convert input to float tensor
- input_tensor = input_tensor.unsqueeze(1) # Add a channel dimension
-
- for _ in range(self.iterations - 1):
- input_tensor = torch.min(input_tensor, F.conv2d(input_tensor, weight=self.weight,
- groups=input_tensor.shape[1],
- padding=self.padding))
- input_tensor = F.conv2d(input_tensor, weight=self.weight, groups=input_tensor.shape[1],
- padding=self.padding)
-
- mask = input_tensor >= self.threshold
- input_tensor[mask] = 1.0
- input_tensor[~mask] /= input_tensor[~mask].max()
-
- input_tensor = input_tensor.squeeze(1) # Remove the extra channel dimension
- output.append(input_tensor.detach().cpu().numpy())
-
- return np.array(output)
-
-transform = transforms.Compose([
- transforms.Resize((512, 512)),
- transforms.ToTensor(),
- transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
-])
-
-
-
-def init_parsing_model(model_path, device="cpu"):
- net = BiSeNet(19)
- net.to(device)
- net.load_state_dict(torch.load(model_path))
- net.eval()
- return net
-
-def transform_images(imgs):
- tensor_images = torch.stack([transform(Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))) for img in imgs], dim=0)
- return tensor_images
-
-def get_parsed_mask(net, imgs, classes=[1, 2, 3, 4, 5, 10, 11, 12, 13], device="cpu", batch_size=8, softness=20):
- if softness > 0:
- smooth_mask = SoftErosion(kernel_size=17, threshold=0.9, iterations=softness).to(device)
-
- masks = []
- for i in tqdm(range(0, len(imgs), batch_size), total=len(imgs) // batch_size, desc="Face-parsing"):
- batch_imgs = imgs[i:i + batch_size]
-
- tensor_images = transform_images(batch_imgs).to(device)
- with torch.no_grad():
- out = net(tensor_images)[0]
- # parsing = out.argmax(dim=1)
-            # target_classes = torch.tensor(classes).to(device)
- # batch_masks = torch.isin(parsing, target_classes).to(device)
- ## torch.isin was slightly slower in my test, so using np.isin
- parsing = out.argmax(dim=1).detach().cpu().numpy()
- batch_masks = np.isin(parsing, classes).astype('float32')
-
- if softness > 0:
- # batch_masks = smooth_mask(batch_masks).transpose(1,0,2,3)[0]
- mask_tensor = torch.from_numpy(batch_masks.copy()).float().to(device)
- batch_masks = smooth_mask(mask_tensor).transpose(1,0,2,3)[0]
-
- yield batch_masks
-
- #masks.append(batch_masks)
-
- #if len(masks) >= 1:
- # masks = np.concatenate(masks, axis=0)
- # masks = np.repeat(np.expand_dims(masks, axis=1), 3, axis=1)
-
- # for i, mask in enumerate(masks):
- # cv2.imwrite(f"mask/{i}.jpg", (mask * 255).astype("uint8"))
-
- #return masks
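-
-# Usage sketch (illustrative; the checkpoint path and image files are
-# placeholders): build the parser once, then consume the generator batch by batch.
-#
-#   net = init_parsing_model("path/to/parsing_model.pth", device="cuda")
-#   frames = [cv2.imread(p) for p in ["face1.jpg", "face2.jpg"]]
-#   for batch_masks in get_parsed_mask(net, frames, device="cuda", batch_size=8):
-#       ...  # one float mask per input frame, at the 512x512 model resolution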
diff --git a/spaces/Superlang/ImageProcessor/annotator/openpose/body.py b/spaces/Superlang/ImageProcessor/annotator/openpose/body.py
deleted file mode 100644
index 11b10b8db047be9b88f5f0756592fdbae3d85027..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/openpose/body.py
+++ /dev/null
@@ -1,278 +0,0 @@
-import cv2
-import numpy as np
-import math
-import time
-from scipy.ndimage.filters import gaussian_filter
-import matplotlib.pyplot as plt
-import matplotlib
-import torch
-from torchvision import transforms
-from typing import NamedTuple, List, Union
-
-from . import util
-from .model import bodypose_model
-
-class Keypoint(NamedTuple):
- x: float
- y: float
- score: float = 1.0
- id: int = -1
-
-
-class BodyResult(NamedTuple):
-    # Note: Using `Union` instead of the `|` operator, as the latter is a Python
-    # 3.10 feature.
-    # Annotator code should be Python 3.8 compatible, as the controlnet repo uses
- # Python 3.8 environment.
- # https://github.com/lllyasviel/ControlNet/blob/d3284fcd0972c510635a4f5abe2eeb71dc0de524/environment.yaml#L6
- keypoints: List[Union[Keypoint, None]]
- total_score: float
- total_parts: int
-
-
-class Body(object):
- def __init__(self, model_path):
- self.model = bodypose_model()
- # if torch.cuda.is_available():
- # self.model = self.model.cuda()
- # print('cuda')
- model_dict = util.transfer(self.model, torch.load(model_path))
- self.model.load_state_dict(model_dict)
- self.model.eval()
-
- def __call__(self, oriImg):
- # scale_search = [0.5, 1.0, 1.5, 2.0]
- scale_search = [0.5]
- boxsize = 368
- stride = 8
- padValue = 128
- thre1 = 0.1
- thre2 = 0.05
- multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
- heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
- paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
-
- for m in range(len(multiplier)):
- scale = multiplier[m]
- imageToTest = util.smart_resize_k(oriImg, fx=scale, fy=scale)
- imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
- im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
- im = np.ascontiguousarray(im)
-
- data = torch.from_numpy(im).float()
- if torch.cuda.is_available():
- data = data.cuda()
- # data = data.permute([2, 0, 1]).unsqueeze(0).float()
- with torch.no_grad():
- data = data.to(self.cn_device)
- Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
- Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy()
- Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy()
-
- # extract outputs, resize, and remove padding
- # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps
- heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps
- heatmap = util.smart_resize_k(heatmap, fx=stride, fy=stride)
- heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- heatmap = util.smart_resize(heatmap, (oriImg.shape[0], oriImg.shape[1]))
-
- # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs
- paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs
- paf = util.smart_resize_k(paf, fx=stride, fy=stride)
- paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- paf = util.smart_resize(paf, (oriImg.shape[0], oriImg.shape[1]))
-
- heatmap_avg += heatmap_avg + heatmap / len(multiplier)
- paf_avg += + paf / len(multiplier)
-
- all_peaks = []
- peak_counter = 0
-
- for part in range(18):
- map_ori = heatmap_avg[:, :, part]
- one_heatmap = gaussian_filter(map_ori, sigma=3)
-
- map_left = np.zeros(one_heatmap.shape)
- map_left[1:, :] = one_heatmap[:-1, :]
- map_right = np.zeros(one_heatmap.shape)
- map_right[:-1, :] = one_heatmap[1:, :]
- map_up = np.zeros(one_heatmap.shape)
- map_up[:, 1:] = one_heatmap[:, :-1]
- map_down = np.zeros(one_heatmap.shape)
- map_down[:, :-1] = one_heatmap[:, 1:]
-
- peaks_binary = np.logical_and.reduce(
- (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1))
- peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse
- peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks]
- peak_id = range(peak_counter, peak_counter + len(peaks))
- peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))]
-
- all_peaks.append(peaks_with_score_and_id)
- peak_counter += len(peaks)
-
- # find connection in the specified sequence, center 29 is in the position 15
- limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
- [1, 16], [16, 18], [3, 17], [6, 18]]
-        # the middle joints heatmap correspondence
- mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \
- [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \
- [55, 56], [37, 38], [45, 46]]
-
- connection_all = []
- special_k = []
- mid_num = 10
-
- for k in range(len(mapIdx)):
- score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
- candA = all_peaks[limbSeq[k][0] - 1]
- candB = all_peaks[limbSeq[k][1] - 1]
- nA = len(candA)
- nB = len(candB)
- indexA, indexB = limbSeq[k]
- if (nA != 0 and nB != 0):
- connection_candidate = []
- for i in range(nA):
- for j in range(nB):
- vec = np.subtract(candB[j][:2], candA[i][:2])
- norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
- norm = max(0.001, norm)
- vec = np.divide(vec, norm)
-
- startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \
- np.linspace(candA[i][1], candB[j][1], num=mid_num)))
-
- vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \
- for I in range(len(startend))])
- vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \
- for I in range(len(startend))])
-
- score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
- score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min(
- 0.5 * oriImg.shape[0] / norm - 1, 0)
- criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts)
- criterion2 = score_with_dist_prior > 0
- if criterion1 and criterion2:
- connection_candidate.append(
- [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]])
-
- connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
- connection = np.zeros((0, 5))
- for c in range(len(connection_candidate)):
- i, j, s = connection_candidate[c][0:3]
- if (i not in connection[:, 3] and j not in connection[:, 4]):
- connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
- if (len(connection) >= min(nA, nB)):
- break
-
- connection_all.append(connection)
- else:
- special_k.append(k)
- connection_all.append([])
-
- # last number in each row is the total parts number of that person
- # the second last number in each row is the score of the overall configuration
- subset = -1 * np.ones((0, 20))
- candidate = np.array([item for sublist in all_peaks for item in sublist])
-
- for k in range(len(mapIdx)):
- if k not in special_k:
- partAs = connection_all[k][:, 0]
- partBs = connection_all[k][:, 1]
- indexA, indexB = np.array(limbSeq[k]) - 1
-
- for i in range(len(connection_all[k])): # = 1:size(temp,1)
- found = 0
- subset_idx = [-1, -1]
- for j in range(len(subset)): # 1:size(subset,1):
- if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
- subset_idx[found] = j
- found += 1
-
- if found == 1:
- j = subset_idx[0]
- if subset[j][indexB] != partBs[i]:
- subset[j][indexB] = partBs[i]
- subset[j][-1] += 1
- subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
- elif found == 2: # if found 2 and disjoint, merge them
- j1, j2 = subset_idx
- membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
- if len(np.nonzero(membership == 2)[0]) == 0: # merge
- subset[j1][:-2] += (subset[j2][:-2] + 1)
- subset[j1][-2:] += subset[j2][-2:]
- subset[j1][-2] += connection_all[k][i][2]
- subset = np.delete(subset, j2, 0)
- else: # as like found == 1
- subset[j1][indexB] = partBs[i]
- subset[j1][-1] += 1
- subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
-
- # if find no partA in the subset, create a new subset
- elif not found and k < 17:
- row = -1 * np.ones(20)
- row[indexA] = partAs[i]
- row[indexB] = partBs[i]
- row[-1] = 2
- row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2]
- subset = np.vstack([subset, row])
- # delete some rows of subset which has few parts occur
- deleteIdx = []
- for i in range(len(subset)):
- if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
- deleteIdx.append(i)
- subset = np.delete(subset, deleteIdx, axis=0)
-
- # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts
- # candidate: x, y, score, id
- return candidate, subset
-
- @staticmethod
- def format_body_result(candidate: np.ndarray, subset: np.ndarray) -> List[BodyResult]:
- """
- Format the body results from the candidate and subset arrays into a list of BodyResult objects.
-
- Args:
- candidate (np.ndarray): An array of candidates containing the x, y coordinates, score, and id
- for each body part.
- subset (np.ndarray): An array of subsets containing indices to the candidate array for each
- person detected. The last two columns of each row hold the total score and total parts
- of the person.
-
- Returns:
- List[BodyResult]: A list of BodyResult objects, where each object represents a person with
- detected keypoints, total score, and total parts.
- """
- return [
- BodyResult(
- keypoints=[
- Keypoint(
- x=candidate[candidate_index][0],
- y=candidate[candidate_index][1],
- score=candidate[candidate_index][2],
- id=candidate[candidate_index][3]
- ) if candidate_index != -1 else None
- for candidate_index in person[:18].astype(int)
- ],
- total_score=person[18],
- total_parts=person[19]
- )
- for person in subset
- ]
-
-
-if __name__ == "__main__":
- body_estimation = Body('../model/body_pose_model.pth')
-
- test_image = '../images/ski.jpg'
- oriImg = cv2.imread(test_image) # B,G,R order
- candidate, subset = body_estimation(oriImg)
- bodies = body_estimation.format_body_result(candidate, subset)
-
- canvas = oriImg
- for body in bodies:
- canvas = util.draw_bodypose(canvas, body)
-
- plt.imshow(canvas[:, :, [2, 1, 0]])
- plt.show()
\ No newline at end of file
diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/train.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/train.py
deleted file mode 100644
index edfe5cd0a5d9811cff22f362936cb15d5ed504a4..0000000000000000000000000000000000000000
--- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/train.py
+++ /dev/null
@@ -1,386 +0,0 @@
-import os
-import itertools
-import argparse
-import time
-import datetime
-import yaml
-from contextlib import nullcontext
-
-
-import torch
-from torch import nn
-
-import utils
-from transformer import TransformerModel
-from utils import get_cosine_schedule_with_warmup, get_openai_lr, StoreDictKeyPair, get_weighted_single_eval_pos_sampler, get_uniform_single_eval_pos_sampler
-import priors
-import encoders
-import positional_encodings
-from utils import init_dist
-from torch.cuda.amp import autocast
-# Assumed to come from this repo's bar_distribution module; provides the Bar
-# distribution losses referenced in the 'barnll' / 'adaptivebarnll' options below.
-from bar_distribution import BarDistribution, FullSupportBarDistribution, get_bucket_limits
-
-class Losses():
- gaussian = nn.GaussianNLLLoss(full=True, reduction='none')
- mse = nn.MSELoss(reduction='none')
- ce = lambda weight : nn.CrossEntropyLoss(reduction='none', weight=weight)
- bce = nn.BCEWithLogitsLoss(reduction='none')
-
-
-def train(priordataloader_class, criterion, encoder_generator, emsize=200, nhid=200, nlayers=6, nhead=2, dropout=0.2,
- epochs=10, steps_per_epoch=100, batch_size=200, bptt=10, lr=None, weight_decay=0.0, warmup_epochs=10, input_normalization=False,
- y_encoder_generator=None, pos_encoder_generator=None, decoder=None, extra_prior_kwargs_dict={}, scheduler=get_cosine_schedule_with_warmup,
- load_weights_from_this_state_dict=None, validation_period=10, single_eval_pos_gen=None, bptt_extra_samples=None, gpu_device='cuda:0',
- aggregate_k_gradients=1, verbose=True, style_encoder_generator=None, check_is_compatible=True, epoch_callback=None,
- initializer=None, initialize_with_model=None, train_mixed_precision=False, total_available_time_in_s=None, normalize_labels=True, **model_extra_args
- ):
- assert (epochs is None) != (total_available_time_in_s is None)
- start_of_training = time.time()
- device = gpu_device if torch.cuda.is_available() else 'cpu:0'
- print(f'Using {device} device')
- using_dist, rank, device = init_dist(device)
- bptt_sampler = (lambda : single_eval_pos_gen() + bptt_extra_samples if callable(single_eval_pos_gen) else single_eval_pos_gen + bptt_extra_samples) if bptt_extra_samples is not None else bptt
- dl = priordataloader_class(num_steps=steps_per_epoch, batch_size=batch_size, seq_len=bptt_sampler, seq_len_maximum=bptt+(bptt_extra_samples if bptt_extra_samples else 0), device=device, **extra_prior_kwargs_dict)
- if dl.fuse_x_y:
- raise Exception("Illegal parameter")
-
- encoder = encoder_generator(dl.num_features+1 if dl.fuse_x_y else dl.num_features,emsize)
- style_def = next(iter(dl))[0][0] # This is (style, x, y), target with x and y with batch size
-
- style_encoder = style_encoder_generator(hyperparameter_definitions=style_def[0], em_size=emsize) if (style_def is not None) else None
- n_out = dl.num_outputs
- if isinstance(criterion, nn.GaussianNLLLoss):
- n_out *= 2
- elif isinstance(criterion, nn.CrossEntropyLoss):
- n_out *= criterion.weight.shape[0]
- model = TransformerModel(encoder, n_out, emsize, nhead, nhid, nlayers, dropout, style_encoder=style_encoder,
- y_encoder=y_encoder_generator(dl.num_outputs, emsize), input_normalization=input_normalization,
- pos_encoder=(pos_encoder_generator or positional_encodings.NoPositionalEncoding)(emsize, bptt*2),
- decoder=decoder, init_method=initializer, **model_extra_args
- )
- model.criterion = criterion
- if load_weights_from_this_state_dict is not None:
- model.load_state_dict(load_weights_from_this_state_dict)
- if initialize_with_model is not None:
- model.init_from_small_model(initialize_with_model)
-
- print(f"Using a Transformer with {sum(p.numel() for p in model.parameters())/1000/1000:.{2}f} M parameters")
-
- try:
- for (k, v), (k2, v2) in zip(model.state_dict().items(), initialize_with_model.state_dict().items()):
- print(k, ((v - v2) / v).abs().mean(), v.shape)
- except Exception:
- pass
-
- model.to(device)
- if using_dist:
- print("Distributed training")
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank], output_device=rank, broadcast_buffers=False)
-
-
- # learning rate
- if lr is None:
- lr = get_openai_lr(model)
- print(f"Using OpenAI max lr of {lr}.")
- optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
- scheduler = scheduler(optimizer, warmup_epochs, epochs if epochs is not None else 100) # when training for fixed time lr schedule takes 100 steps
-
- def train_step():
- model.train() # Turn on the train mode
- total_loss = 0.
- total_positional_losses = 0.
- total_positional_losses_recorded = 0
- before_get_batch = time.time()
- assert len(dl) % aggregate_k_gradients == 0, 'Please set the number of steps per epoch s.t. `aggregate_k_gradients` divides it.'
- valid_batch_steps = 0.0
- for batch, (data, targets) in enumerate(dl):
- if using_dist and not (batch % aggregate_k_gradients == aggregate_k_gradients - 1):
- cm = model.no_sync()
- #print(f'p={rank}, no_sync', force=True)
- else:
- cm = nullcontext()
- #print(f'p={rank}, sync', force=True)
- with cm:
- time_to_get_batch = time.time() - before_get_batch
- before_forward = time.time()
- if bptt_extra_samples is None:
- single_eval_pos = single_eval_pos_gen() if callable(single_eval_pos_gen) else single_eval_pos_gen
- else:
- single_eval_pos = targets.shape[0] - bptt_extra_samples
-
- is_compatible = torch.ones((targets.shape[1])).bool()
- if check_is_compatible or normalize_labels:
- for b in range(targets.shape[1]):
- targets_in_train = torch.unique(targets[:single_eval_pos, b], sorted=True)
- targets_in_eval = torch.unique(targets[single_eval_pos:, b], sorted=True)
-
- if check_is_compatible:
- is_compatible[b] = len(targets_in_train) == len(targets_in_eval) and (targets_in_train == targets_in_eval).all()
- is_compatible[b] = is_compatible[b] and len(targets_in_train) > 1
-
- # Set targets to range starting from 0 (e.g. targets 0, 2, 5, 2 will be converted to 0, 1, 2, 1)
- if normalize_labels:
- targets[:, b] = (targets[:, b] > torch.unique(targets[:, b]).unsqueeze(1)).sum(axis=0).unsqueeze(0)
- valid_batch_steps += is_compatible.float().mean()
- is_compatible = is_compatible.to(device)
- #if using_dist and check_is_compatible:
- # print('step share before reduce',curr_step_share, force=True)
- # curr_step_share = curr_step_share.to(device)
- # torch.distributed.all_reduce_multigpu([curr_step_share], op=torch.distributed.ReduceOp.SUM)
- # curr_step_share = curr_step_share.cpu() / torch.distributed.get_world_size()
- # print('step share after reduce',curr_step_share, torch.distributed.get_world_size(), force=True)
-
- # If style is set to None, it should not be transferred to device
- output = model(tuple(e.to(device) if torch.is_tensor(e) else e for e in data) if isinstance(data, tuple) else data.to(device)
- , single_eval_pos=single_eval_pos)
-
- forward_time = time.time() - before_forward
-
- #output, targets = output[:, is_compatible], targets[:, is_compatible]
-
- if single_eval_pos is not None:
- targets = targets[single_eval_pos:]
- if isinstance(criterion, nn.GaussianNLLLoss):
- assert output.shape[-1] == 2, \
- 'need to write a little bit of code to handle multiple regression targets at once'
-
- mean_pred = output[..., 0]
- var_pred = output[..., 1].abs()
- losses = criterion(mean_pred.flatten(), targets.to(device).flatten(), var=var_pred.flatten())
- elif isinstance(criterion, (nn.MSELoss, nn.BCEWithLogitsLoss)):
- losses = criterion(output.flatten(), targets.to(device).flatten())
- elif isinstance(criterion, (nn.CrossEntropyLoss)):
- #print(n_out, targets.min(), targets.max(), force=True)
- losses = criterion(output.reshape(-1, n_out), targets.to(device).long().flatten())
- else:
- losses = criterion(output.reshape(-1, n_out), targets.to(device).flatten())
- losses = losses.view(*output.shape[0:2])
- loss = losses.mean(0) @ is_compatible.float() / losses.shape[1]
- #loss = torch_nanmean(losses, axis=[0, 1]) * is_compatible.float().mean()
- # not sure whether we can go without the nan checks.
-
- loss.backward()
-
- if ((batch % aggregate_k_gradients == aggregate_k_gradients - 1) and (not check_is_compatible or using_dist))\
- or (valid_batch_steps >= aggregate_k_gradients and (check_is_compatible and not using_dist)):
- with torch.no_grad():
- for p in model.parameters():
- if p.grad is not None:
- p.grad.div_(valid_batch_steps)
- torch.nn.utils.clip_grad_norm_(model.parameters(), 1.)
- try:
- optimizer.step()
- except:
- print("Invalid optimization step encountered")
- optimizer.zero_grad()
- valid_batch_steps = 0.0
-
- step_time = time.time() - before_forward
-
- if not torch.isnan(loss):
- total_loss += loss.item()
- total_positional_losses += losses.mean(1).cpu().detach() if single_eval_pos is None else \
- nn.functional.one_hot(torch.tensor(single_eval_pos), bptt)*loss.cpu().detach()
-
- total_positional_losses_recorded += torch.ones(bptt) if single_eval_pos is None else \
- nn.functional.one_hot(torch.tensor(single_eval_pos), bptt)
-
- before_get_batch = time.time()
- return total_loss / steps_per_epoch, (
- total_positional_losses / total_positional_losses_recorded).tolist(), time_to_get_batch, forward_time, step_time
-
- best_val_loss = float("inf")
- best_model = None
- total_loss = float('inf')
- total_positional_losses = float('inf')
- try:
- for epoch in (range(1, epochs + 1) if epochs is not None else itertools.count(1)):
-
- epoch_start_time = time.time()
- if train_mixed_precision:
- with autocast():
- total_loss, total_positional_losses, time_to_get_batch, forward_time, step_time = train_step()
- else:
- total_loss, total_positional_losses, time_to_get_batch, forward_time, step_time = train_step()
- if hasattr(dl, 'validate') and epoch % validation_period == 0:
- with torch.no_grad():
- val_score = dl.validate(model)
- else:
- val_score = None
-
- if verbose:
- print('-' * 89)
- print(
- f'| end of epoch {epoch:3d} | time: {(time.time() - epoch_start_time):5.2f}s | mean loss {total_loss:5.2f} | '
- f"pos losses {','.join([f'{l:5.2f}' for l in total_positional_losses])}, lr {scheduler.get_last_lr()[0]}"
- f' data time {time_to_get_batch:5.2f} step time {step_time:5.2f}'
- f' forward time {forward_time:5.2f}' + (f'val score {val_score}' if val_score is not None else ''))
- print('-' * 89)
-
- # stepping with wallclock time based scheduler
- current_time = time.time()
- if epoch_callback is not None and rank == 0:
- epoch_callback(model, epoch / epochs if total_available_time_in_s is None else # noqa
- (current_time - start_of_training) / total_available_time_in_s # noqa
- )
- if epochs is None and (current_time - start_of_training) > total_available_time_in_s: # noqa
- break
- if epochs is None:
- scheduler.step((current_time - epoch_start_time) / total_available_time_in_s * 100)
- else:
- scheduler.step()
- except KeyboardInterrupt:
- pass
-
- return total_loss, total_positional_losses, model.to('cpu'), dl
-
-def _parse_args(config_parser, parser):
- # Do we have a config file to parse?
- args_config, remaining = config_parser.parse_known_args()
- if args_config.config:
- with open(args_config.config, 'r') as f:
- cfg = yaml.safe_load(f)
- parser.set_defaults(**cfg)
-
- # The main arg parser parses the rest of the args, the usual
- # defaults will have been overridden if config file specified.
- args = parser.parse_args(remaining)
-
- # Cache the args as a text string to save them in the output dir later
- args_text = yaml.safe_dump(args.__dict__, default_flow_style=False)
- return args, args_text
-
-
-if __name__ == '__main__':
- config_parser = argparse.ArgumentParser(description='Only used as a first parser for the config file path.')
- config_parser.add_argument('--config')
- parser = argparse.ArgumentParser()
- parser.add_argument('prior')
- parser.add_argument('--loss_function', default='barnll')
- # Optional Arg's for `--loss_function barnll`
- parser.add_argument('--min_y', type=float, help='barnll can only model y in strict ranges, this is the minimum y can take.')
- parser.add_argument('--max_y', type=float, help='barnll can only model y in strict ranges, this is the maximum y can take.')
- parser.add_argument('--num_buckets', default=100, type=int)
- #parser.add_argument('--num_features', default=None, type=int, help='Specify depending on the prior.')
- parser.add_argument("--extra_prior_kwargs_dict", default={'fuse_x_y': False}, dest="extra_prior_kwargs_dict", action=StoreDictKeyPair, nargs="+", metavar="KEY=VAL", help='Specify depending on the prior.')
- parser.add_argument('--encoder', default='linear', type=str, help='Specify depending on the prior.')
- parser.add_argument('--y_encoder', default='linear', type=str, help='Specify depending on the prior. You should specify this if you do not fuse x and y.')
- parser.add_argument('--pos_encoder', default='sinus', type=str, help='Specify depending on the prior.')
- parser.add_argument('--bptt', default=10, type=int)
- parser.add_argument('--epochs', default=200, type=int)
- parser.add_argument('--warmup_epochs', default=50, type=int)
- parser.add_argument('--validation_period', default=10, type=int)
- parser.add_argument('--permutation_invariant_max_eval_pos', default=None, type=int, help='Set this to an int to ')
- parser.add_argument('--permutation_invariant_sampling', default='weighted', help="Only relevant if --permutation_invariant_max_eval_pos is set.")
-
- # these can likely be mostly left at defaults
- parser.add_argument('--emsize', default=512, type=int) # sometimes even larger is better e.g. 1024
- parser.add_argument('--nlayers', default=6, type=int)
- parser.add_argument('--nhid', default=None, type=int) # 2*emsize is the default
- parser.add_argument('--nhead', default=4, type=int) # nhead = emsize / 64 in the original paper
- parser.add_argument('--dropout', default=.0, type=float)
- parser.add_argument('--steps_per_epoch', default=10, type=int)
- parser.add_argument('--batch_size', default=1000, type=int)
- parser.add_argument('--lr', '--learning_rate', default=.001, type=float) # try also .0003, .0001, go lower with lower batch size
-
- args, _ = _parse_args(config_parser, parser)
-
- if args.nhid is None:
- args.nhid = 2*args.emsize
-
- prior = args.__dict__.pop('prior')
-
- if prior == 'gp':
- prior = priors.fast_gp.DataLoader
- elif prior == 'ridge':
- prior = priors.ridge.DataLoader
- elif prior == 'stroke':
- prior = priors.stroke.DataLoader
- elif prior == 'mix_gp':
- prior = priors.fast_gp_mix.DataLoader
- else:
- raise NotImplementedError(f'Prior == {prior}.')
-
- loss_function = args.__dict__.pop('loss_function')
-
- criterion = nn.GaussianNLLLoss(reduction='none', full=True)
-    classification_criterion = nn.CrossEntropyLoss(reduction='none')
- num_buckets = args.__dict__.pop('num_buckets')
- max_y = args.__dict__.pop('max_y')
- min_y = args.__dict__.pop('min_y')
- # criterion = nn.MSELoss(reduction='none')
-
- def get_y_sample():
- dl = prior(num_steps=1, batch_size=args.batch_size * args.steps_per_epoch, seq_len=args.bptt, device=device,
- **args.extra_prior_kwargs_dict)
- y_sample = next(iter(dl))[-1]
- print(f'Creating Bar distribution with borders from y sample of size {y_sample.numel()}')
- return y_sample
-
- if loss_function == 'ce':
- criterion = nn.CrossEntropyLoss(reduction='none')
- elif loss_function == 'gaussnll':
- criterion = nn.GaussianNLLLoss(reduction='none', full=True)
- elif loss_function == 'mse':
- criterion = nn.MSELoss(reduction='none')
- elif loss_function == 'barnll':
- criterion = BarDistribution(borders=get_bucket_limits(num_buckets, full_range=(min_y,max_y)))
- elif loss_function == 'adaptivebarnll':
- borders = get_bucket_limits(num_buckets, ys=get_y_sample(), full_range=(min_y,max_y))
- criterion = BarDistribution(borders=borders)
- elif loss_function == 'adaptivefullsupportbarnll':
-        assert min_y is None and max_y is None, "Please do not specify `min_y` and `max_y` with `adaptivefullsupportbarnll`."
- borders = get_bucket_limits(num_buckets, ys=get_y_sample())
- criterion = FullSupportBarDistribution(borders=borders)
- else:
- raise NotImplementedError(f'loss_function == {loss_function}.')
-
-
-
- encoder = args.__dict__.pop('encoder')
- y_encoder = args.__dict__.pop('y_encoder')
-
- def get_encoder_generator(encoder):
- if encoder == 'linear':
- encoder_generator = encoders.Linear
- elif encoder == 'mlp':
- encoder_generator = encoders.MLP
- elif encoder == 'positional':
- encoder_generator = encoders.Positional
- else:
- raise NotImplementedError(f'A {encoder} encoder is not valid.')
- return encoder_generator
-
- encoder_generator = get_encoder_generator(encoder)
- y_encoder_generator = get_encoder_generator(y_encoder)
-
- pos_encoder = args.__dict__.pop('pos_encoder')
-
- if pos_encoder == 'none':
- pos_encoder_generator = None
- elif pos_encoder == 'sinus':
- pos_encoder_generator = positional_encodings.PositionalEncoding
- elif pos_encoder == 'learned':
- pos_encoder_generator = positional_encodings.LearnedPositionalEncoding
- elif pos_encoder == 'paired_scrambled_learned':
- pos_encoder_generator = positional_encodings.PairedScrambledPositionalEncodings
- else:
-        raise NotImplementedError(f'pos_encoder == {pos_encoder} is not valid.')
-
- permutation_invariant_max_eval_pos = args.__dict__.pop('permutation_invariant_max_eval_pos')
- permutation_invariant_sampling = args.__dict__.pop('permutation_invariant_sampling')
- if permutation_invariant_max_eval_pos is not None:
- if permutation_invariant_sampling == 'weighted':
- get_sampler = get_weighted_single_eval_pos_sampler
- elif permutation_invariant_sampling == 'uniform':
- get_sampler = get_uniform_single_eval_pos_sampler
- else:
- raise ValueError()
- args.__dict__['single_eval_pos_gen'] = get_sampler(permutation_invariant_max_eval_pos)
-
-
- print("ARGS for `train`:", args.__dict__)
-
- train(prior, criterion, encoder_generator,
- y_encoder_generator=y_encoder_generator, pos_encoder_generator=pos_encoder_generator,
- **args.__dict__)
-
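-# Example invocation (illustrative; the flag values are arbitrary):
-#
-#   python train.py gp --loss_function gaussnll --epochs 50 --bptt 10 \
-#       --emsize 512 --nlayers 6 --nhead 4 --batch_size 100 --lr 0.0003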
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/discovery.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/discovery.py
deleted file mode 100644
index 3110b72794f1e4fd75254bdb3dbf81a89918596e..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/discovery.py
+++ /dev/null
@@ -1,611 +0,0 @@
-"""Automatic discovery of Python modules and packages (for inclusion in the
-distribution) and other config values.
-
-For the purposes of this module, the following nomenclature is used:
-
-- "src-layout": a directory representing a Python project that contains a "src"
- folder. Everything under the "src" folder is meant to be included in the
- distribution when packaging the project. Example::
-
- .
- ├── tox.ini
- ├── pyproject.toml
- └── src/
- └── mypkg/
- ├── __init__.py
- ├── mymodule.py
- └── my_data_file.txt
-
-- "flat-layout": a Python project that does not use "src-layout" but instead
-  has a directory under the project root for each package::
-
- .
- ├── tox.ini
- ├── pyproject.toml
- └── mypkg/
- ├── __init__.py
- ├── mymodule.py
- └── my_data_file.txt
-
-- "single-module": a project that contains a single Python script direct under
- the project root (no directory used)::
-
- .
- ├── tox.ini
- ├── pyproject.toml
- └── mymodule.py
-
-"""
-
-import itertools
-import os
-from fnmatch import fnmatchcase
-from glob import glob
-from pathlib import Path
-from typing import (
- TYPE_CHECKING,
- Dict,
- Iterable,
- Iterator,
- List,
- Mapping,
- Optional,
- Tuple,
- Union
-)
-
-import _distutils_hack.override # noqa: F401
-
-from distutils import log
-from distutils.util import convert_path
-
-_Path = Union[str, os.PathLike]
-StrIter = Iterator[str]
-
-chain_iter = itertools.chain.from_iterable
-
-if TYPE_CHECKING:
- from setuptools import Distribution # noqa
-
-
-def _valid_name(path: _Path) -> bool:
- # Ignore invalid names that cannot be imported directly
- return os.path.basename(path).isidentifier()
-
-
-class _Filter:
- """
- Given a list of patterns, create a callable that will be true only if
- the input matches at least one of the patterns.
- """
-
- def __init__(self, *patterns: str):
- self._patterns = dict.fromkeys(patterns)
-
- def __call__(self, item: str) -> bool:
- return any(fnmatchcase(item, pat) for pat in self._patterns)
-
- def __contains__(self, item: str) -> bool:
- return item in self._patterns
-
-
-class _Finder:
- """Base class that exposes functionality for module/package finders"""
-
- ALWAYS_EXCLUDE: Tuple[str, ...] = ()
- DEFAULT_EXCLUDE: Tuple[str, ...] = ()
-
- @classmethod
- def find(
- cls,
- where: _Path = '.',
- exclude: Iterable[str] = (),
- include: Iterable[str] = ('*',)
- ) -> List[str]:
- """Return a list of all Python items (packages or modules, depending on
- the finder implementation) found within directory 'where'.
-
- 'where' is the root directory which will be searched.
- It should be supplied as a "cross-platform" (i.e. URL-style) path;
- it will be converted to the appropriate local path syntax.
-
- 'exclude' is a sequence of names to exclude; '*' can be used
- as a wildcard in the names.
- When finding packages, 'foo.*' will exclude all subpackages of 'foo'
- (but not 'foo' itself).
-
- 'include' is a sequence of names to include.
- If it's specified, only the named items will be included.
- If it's not specified, all found items will be included.
- 'include' can contain shell style wildcard patterns just like
- 'exclude'.
- """
-
- exclude = exclude or cls.DEFAULT_EXCLUDE
- return list(
- cls._find_iter(
- convert_path(str(where)),
- _Filter(*cls.ALWAYS_EXCLUDE, *exclude),
- _Filter(*include),
- )
- )
-
- @classmethod
- def _find_iter(cls, where: _Path, exclude: _Filter, include: _Filter) -> StrIter:
- raise NotImplementedError
-
-
-class PackageFinder(_Finder):
- """
- Generate a list of all Python packages found within a directory
- """
-
- ALWAYS_EXCLUDE = ("ez_setup", "*__pycache__")
-
- @classmethod
- def _find_iter(cls, where: _Path, exclude: _Filter, include: _Filter) -> StrIter:
- """
- All the packages found in 'where' that pass the 'include' filter, but
- not the 'exclude' filter.
- """
- for root, dirs, files in os.walk(str(where), followlinks=True):
- # Copy dirs to iterate over it, then empty dirs.
- all_dirs = dirs[:]
- dirs[:] = []
-
- for dir in all_dirs:
- full_path = os.path.join(root, dir)
- rel_path = os.path.relpath(full_path, where)
- package = rel_path.replace(os.path.sep, '.')
-
- # Skip directory trees that are not valid packages
- if '.' in dir or not cls._looks_like_package(full_path, package):
- continue
-
- # Should this package be included?
- if include(package) and not exclude(package):
- yield package
-
- # Early pruning if there is nothing else to be scanned
- if f"{package}*" in exclude or f"{package}.*" in exclude:
- continue
-
- # Keep searching subdirectories, as there may be more packages
- # down there, even if the parent was excluded.
- dirs.append(dir)
-
- @staticmethod
- def _looks_like_package(path: _Path, _package_name: str) -> bool:
- """Does a directory look like a package?"""
- return os.path.isfile(os.path.join(path, '__init__.py'))
-
-
-class PEP420PackageFinder(PackageFinder):
- @staticmethod
- def _looks_like_package(_path: _Path, _package_name: str) -> bool:
- return True
-
-
-class ModuleFinder(_Finder):
- """Find isolated Python modules.
- This function will **not** recurse subdirectories.
- """
-
- @classmethod
- def _find_iter(cls, where: _Path, exclude: _Filter, include: _Filter) -> StrIter:
- for file in glob(os.path.join(where, "*.py")):
- module, _ext = os.path.splitext(os.path.basename(file))
-
- if not cls._looks_like_module(module):
- continue
-
- if include(module) and not exclude(module):
- yield module
-
- _looks_like_module = staticmethod(_valid_name)
-
-
-# We have to be extra careful in the case of flat layout to not include files
-# and directories not meant for distribution (e.g. tool-related)
-
-
-class FlatLayoutPackageFinder(PEP420PackageFinder):
- _EXCLUDE = (
- "ci",
- "bin",
- "doc",
- "docs",
- "documentation",
- "manpages",
- "news",
- "changelog",
- "test",
- "tests",
- "unit_test",
- "unit_tests",
- "example",
- "examples",
- "scripts",
- "tools",
- "util",
- "utils",
- "python",
- "build",
- "dist",
- "venv",
- "env",
- "requirements",
- # ---- Task runners / Build tools ----
- "tasks", # invoke
- "fabfile", # fabric
- "site_scons", # SCons
- # ---- Other tools ----
- "benchmark",
- "benchmarks",
- "exercise",
- "exercises",
- "htmlcov", # Coverage.py
- # ---- Hidden directories/Private packages ----
- "[._]*",
- )
-
- DEFAULT_EXCLUDE = tuple(chain_iter((p, f"{p}.*") for p in _EXCLUDE))
- """Reserved package names"""
-
- @staticmethod
- def _looks_like_package(_path: _Path, package_name: str) -> bool:
- names = package_name.split('.')
- # Consider PEP 561
- root_pkg_is_valid = names[0].isidentifier() or names[0].endswith("-stubs")
- return root_pkg_is_valid and all(name.isidentifier() for name in names[1:])
-
-
-class FlatLayoutModuleFinder(ModuleFinder):
- DEFAULT_EXCLUDE = (
- "setup",
- "conftest",
- "test",
- "tests",
- "example",
- "examples",
- "build",
- # ---- Task runners ----
- "toxfile",
- "noxfile",
- "pavement",
- "dodo",
- "tasks",
- "fabfile",
- # ---- Other tools ----
- "[Ss][Cc]onstruct", # SCons
- "conanfile", # Connan: C/C++ build tool
- "manage", # Django
- "benchmark",
- "benchmarks",
- "exercise",
- "exercises",
- # ---- Hidden files/Private modules ----
- "[._]*",
- )
- """Reserved top-level module names"""
-
-
-def _find_packages_within(root_pkg: str, pkg_dir: _Path) -> List[str]:
- nested = PEP420PackageFinder.find(pkg_dir)
- return [root_pkg] + [".".join((root_pkg, n)) for n in nested]
-
-
-class ConfigDiscovery:
- """Fill-in metadata and options that can be automatically derived
- (from other metadata/options, the file system or conventions)
- """
-
- def __init__(self, distribution: "Distribution"):
- self.dist = distribution
- self._called = False
- self._disabled = False
- self._skip_ext_modules = False
-
- def _disable(self):
- """Internal API to disable automatic discovery"""
- self._disabled = True
-
- def _ignore_ext_modules(self):
- """Internal API to disregard ext_modules.
-
- Normally auto-discovery would not be triggered if ``ext_modules`` are set
- (this is done for backward compatibility with existing packages relying on
- ``setup.py`` or ``setup.cfg``). However, ``setuptools`` can call this function
- to ignore given ``ext_modules`` and proceed with the auto-discovery if
- ``packages`` and ``py_modules`` are not given (e.g. when using pyproject.toml
- metadata).
- """
- self._skip_ext_modules = True
-
- @property
- def _root_dir(self) -> _Path:
- # The best is to wait until `src_root` is set in dist, before using _root_dir.
- return self.dist.src_root or os.curdir
-
- @property
- def _package_dir(self) -> Dict[str, str]:
- if self.dist.package_dir is None:
- return {}
- return self.dist.package_dir
-
- def __call__(self, force=False, name=True, ignore_ext_modules=False):
- """Automatically discover missing configuration fields
- and modifies the given ``distribution`` object in-place.
-
- Note that by default this will only have an effect the first time the
- ``ConfigDiscovery`` object is called.
-
- To repeatedly invoke automatic discovery (e.g. when the project
- directory changes), please use ``force=True`` (or create a new
- ``ConfigDiscovery`` instance).
- """
- if force is False and (self._called or self._disabled):
- # Avoid overhead of multiple calls
- return
-
- self._analyse_package_layout(ignore_ext_modules)
- if name:
- self.analyse_name() # depends on ``packages`` and ``py_modules``
-
- self._called = True
-
- def _explicitly_specified(self, ignore_ext_modules: bool) -> bool:
- """``True`` if the user has specified some form of package/module listing"""
- ignore_ext_modules = ignore_ext_modules or self._skip_ext_modules
- ext_modules = not (self.dist.ext_modules is None or ignore_ext_modules)
- return (
- self.dist.packages is not None
- or self.dist.py_modules is not None
- or ext_modules
- or hasattr(self.dist, "configuration") and self.dist.configuration
- # ^ Some projects use numpy.distutils.misc_util.Configuration
- )
-
- def _analyse_package_layout(self, ignore_ext_modules: bool) -> bool:
- if self._explicitly_specified(ignore_ext_modules):
- # A package/module listing was given explicitly; for backward compatibility,
- # automatic discovery is only attempted when nothing is given.
- return True
-
- log.debug(
- "No `packages` or `py_modules` configuration, performing "
- "automatic discovery."
- )
-
- return (
- self._analyse_explicit_layout()
- or self._analyse_src_layout()
- # flat-layout is the trickiest for discovery so it should be last
- or self._analyse_flat_layout()
- )
-
- def _analyse_explicit_layout(self) -> bool:
- """The user can explicitly give a package layout via ``package_dir``"""
- package_dir = self._package_dir.copy() # don't modify directly
- package_dir.pop("", None) # This falls under the "src-layout" umbrella
- root_dir = self._root_dir
-
- if not package_dir:
- return False
-
- log.debug(f"`explicit-layout` detected -- analysing {package_dir}")
- pkgs = chain_iter(
- _find_packages_within(pkg, os.path.join(root_dir, parent_dir))
- for pkg, parent_dir in package_dir.items()
- )
- self.dist.packages = list(pkgs)
- log.debug(f"discovered packages -- {self.dist.packages}")
- return True
-
- def _analyse_src_layout(self) -> bool:
- """Try to find all packages or modules under the ``src`` directory
- (or anything pointed by ``package_dir[""]``).
-
- The "src-layout" is relatively safe for automatic discovery.
- We assume that everything within is meant to be included in the
- distribution.
-
- If ``package_dir[""]`` is not given, but the ``src`` directory exists,
- this function will set ``package_dir[""] = "src"``.
- """
- package_dir = self._package_dir
- src_dir = os.path.join(self._root_dir, package_dir.get("", "src"))
- if not os.path.isdir(src_dir):
- return False
-
- log.debug(f"`src-layout` detected -- analysing {src_dir}")
- package_dir.setdefault("", os.path.basename(src_dir))
- self.dist.package_dir = package_dir # persist eventual modifications
- self.dist.packages = PEP420PackageFinder.find(src_dir)
- self.dist.py_modules = ModuleFinder.find(src_dir)
- log.debug(f"discovered packages -- {self.dist.packages}")
- log.debug(f"discovered py_modules -- {self.dist.py_modules}")
- return True
-
- def _analyse_flat_layout(self) -> bool:
- """Try to find all packages and modules under the project root.
-
- Since the ``flat-layout`` is more dangerous in terms of accidentally including
- extra files/directories, this function is more conservative and will raise an
- error if multiple packages or modules are found.
-
- This assumes that multi-package dists are uncommon and refuses to support that
- use case in order to prevent unintended errors.
- """
- log.debug(f"`flat-layout` detected -- analysing {self._root_dir}")
- return self._analyse_flat_packages() or self._analyse_flat_modules()
-
- def _analyse_flat_packages(self) -> bool:
- self.dist.packages = FlatLayoutPackageFinder.find(self._root_dir)
- top_level = remove_nested_packages(remove_stubs(self.dist.packages))
- log.debug(f"discovered packages -- {self.dist.packages}")
- self._ensure_no_accidental_inclusion(top_level, "packages")
- return bool(top_level)
-
- def _analyse_flat_modules(self) -> bool:
- self.dist.py_modules = FlatLayoutModuleFinder.find(self._root_dir)
- log.debug(f"discovered py_modules -- {self.dist.py_modules}")
- self._ensure_no_accidental_inclusion(self.dist.py_modules, "modules")
- return bool(self.dist.py_modules)
-
- def _ensure_no_accidental_inclusion(self, detected: List[str], kind: str):
- if len(detected) > 1:
- from inspect import cleandoc
-
- from setuptools.errors import PackageDiscoveryError
-
- msg = f"""Multiple top-level {kind} discovered in a flat-layout: {detected}.
-
- To avoid accidental inclusion of unwanted files or directories,
- setuptools will not proceed with this build.
-
- If you are trying to create a single distribution with multiple {kind}
- on purpose, you should not rely on automatic discovery.
- Instead, consider the following options:
-
- 1. set up custom discovery (`find` directive with `include` or `exclude`)
- 2. use a `src-layout`
- 3. explicitly set `py_modules` or `packages` with a list of names
-
- To find more information, look for "package discovery" on setuptools docs.
- """
- raise PackageDiscoveryError(cleandoc(msg))
-
- def analyse_name(self):
- """The packages/modules are the essential contribution of the author.
- Therefore the name of the distribution can be derived from them.
- """
- if self.dist.metadata.name or self.dist.name:
- # get_name() is not reliable (can return "UNKNOWN")
- return None
-
- log.debug("No `name` configuration, performing automatic discovery")
-
- name = (
- self._find_name_single_package_or_module()
- or self._find_name_from_packages()
- )
- if name:
- self.dist.metadata.name = name
-
- def _find_name_single_package_or_module(self) -> Optional[str]:
- """Exactly one module or package"""
- for field in ('packages', 'py_modules'):
- items = getattr(self.dist, field, None) or []
- if items and len(items) == 1:
- log.debug(f"Single module/package detected, name: {items[0]}")
- return items[0]
-
- return None
-
- def _find_name_from_packages(self) -> Optional[str]:
- """Try to find the root package that is not a PEP 420 namespace"""
- if not self.dist.packages:
- return None
-
- packages = remove_stubs(sorted(self.dist.packages, key=len))
- package_dir = self.dist.package_dir or {}
-
- parent_pkg = find_parent_package(packages, package_dir, self._root_dir)
- if parent_pkg:
- log.debug(f"Common parent package detected, name: {parent_pkg}")
- return parent_pkg
-
- log.warn("No parent package detected, impossible to derive `name`")
- return None
-
-
-def remove_nested_packages(packages: List[str]) -> List[str]:
- """Remove nested packages from a list of packages.
-
- >>> remove_nested_packages(["a", "a.b1", "a.b2", "a.b1.c1"])
- ['a']
- >>> remove_nested_packages(["a", "b", "c.d", "c.d.e.f", "g.h", "a.a1"])
- ['a', 'b', 'c.d', 'g.h']
- """
- pkgs = sorted(packages, key=len)
- top_level = pkgs[:]
- size = len(pkgs)
- for i, name in enumerate(reversed(pkgs)):
- if any(name.startswith(f"{other}.") for other in top_level):
- top_level.pop(size - i - 1)
-
- return top_level
-
-
-def remove_stubs(packages: List[str]) -> List[str]:
- """Remove type stubs (:pep:`561`) from a list of packages.
-
- >>> remove_stubs(["a", "a.b", "a-stubs", "a-stubs.b.c", "b", "c-stubs"])
- ['a', 'a.b', 'b']
- """
- return [pkg for pkg in packages if not pkg.split(".")[0].endswith("-stubs")]
-
-
-def find_parent_package(
- packages: List[str], package_dir: Mapping[str, str], root_dir: _Path
-) -> Optional[str]:
- """Find the parent package that is not a namespace."""
- packages = sorted(packages, key=len)
- common_ancestors = []
- for i, name in enumerate(packages):
- if not all(n.startswith(f"{name}.") for n in packages[i+1:]):
- # Since packages are sorted by length, this condition is able
- # to find a list of all common ancestors.
- # When there is divergence (e.g. multiple root packages)
- # the list will be empty
- break
- common_ancestors.append(name)
-
- for name in common_ancestors:
- pkg_path = find_package_path(name, package_dir, root_dir)
- init = os.path.join(pkg_path, "__init__.py")
- if os.path.isfile(init):
- return name
-
- return None
-
-
-def find_package_path(
- name: str, package_dir: Mapping[str, str], root_dir: _Path
-) -> str:
- """Given a package name, return the path where it should be found on
- disk, considering the ``package_dir`` option.
-
- >>> path = find_package_path("my.pkg", {"": "root/is/nested"}, ".")
- >>> path.replace(os.sep, "/")
- './root/is/nested/my/pkg'
-
- >>> path = find_package_path("my.pkg", {"my": "root/is/nested"}, ".")
- >>> path.replace(os.sep, "/")
- './root/is/nested/pkg'
-
- >>> path = find_package_path("my.pkg", {"my.pkg": "root/is/nested"}, ".")
- >>> path.replace(os.sep, "/")
- './root/is/nested'
-
- >>> path = find_package_path("other.pkg", {"my.pkg": "root/is/nested"}, ".")
- >>> path.replace(os.sep, "/")
- './other/pkg'
- """
- parts = name.split(".")
- for i in range(len(parts), 0, -1):
- # Look backwards, the most specific package_dir first
- partial_name = ".".join(parts[:i])
- if partial_name in package_dir:
- parent = package_dir[partial_name]
- return os.path.join(root_dir, parent, *parts[i:])
-
- parent = package_dir.get("") or ""
- return os.path.join(root_dir, *parent.split("/"), *parts)
-
-
-def construct_package_dir(packages: List[str], package_path: _Path) -> Dict[str, str]:
- parent_pkgs = remove_nested_packages(packages)
- prefix = Path(package_path).parts
- return {pkg: "/".join([*prefix, *pkg.split(".")]) for pkg in parent_pkgs}
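-# Usage sketch (hypothetical values): construct_package_dir(["pkg", "pkg.sub"], "src")
-# returns {"pkg": "src/pkg"}; nested packages collapse into their top-level parent.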
diff --git a/spaces/TechWithAnirudh/eachadea-vicuna-13b/app.py b/spaces/TechWithAnirudh/eachadea-vicuna-13b/app.py
deleted file mode 100644
index addb55d6c6dfbaed9c20434d7cbd90c667360c9a..0000000000000000000000000000000000000000
--- a/spaces/TechWithAnirudh/eachadea-vicuna-13b/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/eachadea/vicuna-13b").launch()
\ No newline at end of file
diff --git a/spaces/Tetel/secondbing/EdgeGPT/conversation.py b/spaces/Tetel/secondbing/EdgeGPT/conversation.py
deleted file mode 100644
index b3952edfe5acd44192548e098ee451100dcf4a12..0000000000000000000000000000000000000000
--- a/spaces/Tetel/secondbing/EdgeGPT/conversation.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import json
-import os
-from typing import List
-from typing import Union
-from .constants import HEADER_IMG_UPLOAD
-import httpx
-import random
-
-from .constants import HEADERS_INIT_CONVER
-from .exceptions import NotAllowedToAccess
-
-
-class Conversation:
- def __init__(
- self,
- proxy: Union[str, None] = None,
- async_mode: bool = False,
- cookies: Union[List[dict], None] = None,
- ) -> None:
- if async_mode:
- return
- self.struct: dict = {
- "conversationId": None,
- "clientId": None,
- "conversationSignature": None,
- "result": {"value": "Success", "message": None},
- }
- self.img_id: dict = {
- "blobId": None,
- "processedBlobId": None,
- }
- self.proxy = proxy
- proxy = (
- proxy
- or os.environ.get("all_proxy")
- or os.environ.get("ALL_PROXY")
- or os.environ.get("https_proxy")
- or os.environ.get("HTTPS_PROXY")
- or None
- )
- if proxy is not None and proxy.startswith("socks5h://"):
- proxy = "socks5://" + proxy[len("socks5h://") :]
- self.session = httpx.Client(
- proxies=proxy,
- timeout=900,
- headers=HEADERS_INIT_CONVER,
- )
- if cookies:
- for cookie in cookies:
- self.session.cookies.set(cookie["name"], cookie["value"])
- # Send GET request
- response = self.session.get(
- url=os.environ.get("BING_PROXY_URL")
- or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- print(f"Status code: {response.status_code}")
- print(response.text)
- print(response.url)
- raise Exception("Authentication failed")
- try:
- self.struct = response.json()
- except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
- raise Exception(
- "Authentication failed. You have not been accepted into the beta.",
- ) from exc
- if self.struct["result"]["value"] == "UnauthorizedRequest":
- raise NotAllowedToAccess(self.struct["result"]["message"])
-
- @staticmethod
- async def create(
- proxy: Union[str, None] = None,
- cookies: Union[List[dict], None] = None,
- imageInput: str | None = None
- ) -> "Conversation":
- self = Conversation(async_mode=True)
- self.struct = {
- "conversationId": None,
- "clientId": None,
- "conversationSignature": None,
- "result": {"value": "Success", "message": None},
- }
- self.img_id = {
- "blobId": None,
- "processedBlobId": None,
- }
- self.proxy = proxy
- proxy = (
- proxy
- or os.environ.get("all_proxy")
- or os.environ.get("ALL_PROXY")
- or os.environ.get("https_proxy")
- or os.environ.get("HTTPS_PROXY")
- or None
- )
- if proxy is not None and proxy.startswith("socks5h://"):
- proxy = "socks5://" + proxy[len("socks5h://") :]
- transport = httpx.AsyncHTTPTransport(retries=900)
- # Convert cookie format to httpx format
- formatted_cookies = None
- if cookies:
- formatted_cookies = httpx.Cookies()
- for cookie in cookies:
- formatted_cookies.set(cookie["name"], cookie["value"])
- async with httpx.AsyncClient(
- proxies=proxy,
- timeout=30,
- headers={
- **HEADERS_INIT_CONVER,
- "x-forwarded-for": f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
- },
- transport=transport,
- cookies=formatted_cookies,
- ) as client:
- # GET BlobId
- if imageInput:
- files = {
- 'knowledgeRequest': (None,
- '{"imageInfo":{},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"","convotone":"Creative"}}}'),
- 'imageBase64': (None, imageInput)
- }
- response_img = await client.post(
- url="https://www.bing.com/images/kblob",
- headers=HEADER_IMG_UPLOAD,
- files=files,
- follow_redirects=True,
- )
- if response_img.status_code != 200:
- print(f"Status code: {response_img.status_code}")
- print(response_img.text)
- print(response_img.url)
- raise Exception("Authentication failed")
- try:
- self.img_id = response_img.json()
- except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
- print(response_img.text)
- raise Exception(
- "Authentication failed. You have not been accepted into the beta.",
- ) from exc
-
- response = await client.get(
- url=os.environ.get("BING_PROXY_URL")
- or "https://www.bing.com/turing/conversation/create",
- follow_redirects=True,
- )
-
- if response.status_code != 200:
- print(f"Status code: {response.status_code}")
- print(response.text)
- print(response.url)
- raise Exception("Authentication failed")
-
- try:
- self.struct = response.json()
- except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
- print(response.text)
- raise Exception(
- "Authentication failed. You have not been accepted into the beta.",
- ) from exc
- if self.struct["result"]["value"] == "UnauthorizedRequest":
- raise NotAllowedToAccess(self.struct["result"]["message"])
- return self
diff --git a/spaces/ThirdEyeData/Object_Detection/README.md b/spaces/ThirdEyeData/Object_Detection/README.md
deleted file mode 100644
index 8dcda5b173cb7131b0c394369f2e7eb312eb78ab..0000000000000000000000000000000000000000
--- a/spaces/ThirdEyeData/Object_Detection/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Object Detection
-emoji: 🚀
-colorFrom: pink
-colorTo: green
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Um124/Global_Warming_Analysis/pages/Income per Person GDP data Analysis.py b/spaces/Um124/Global_Warming_Analysis/pages/Income per Person GDP data Analysis.py
deleted file mode 100644
index f049602eb27018ffaaccc08df660412feea9d9db..0000000000000000000000000000000000000000
--- a/spaces/Um124/Global_Warming_Analysis/pages/Income per Person GDP data Analysis.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import pandas as pd
-import numpy as np
-import plotly.express as px
-import streamlit as st
-
-
-st.set_page_config(
- page_title='Income per Person GDP data Analysis',
- page_icon='📈',
- layout='wide'
-)
-
-Years=['1800','1801','1802','1803','1804','1805','1806','1807','1808','1809','1810','1811','1812','1813',
-'1814','1815','1816','1817','1818','1819','1820','1821','1822','1823','1824','1825','1826','1827','1828',
-'1829','1830','1831','1832','1833','1834','1835','1836','1837','1838','1839','1840','1841','1842','1843',
-'1844','1845','1846','1847','1848','1849','1850','1851','1852','1853','1854','1855','1856','1857','1858',
-'1859','1860','1861','1862','1863','1864','1865','1866','1867','1868','1869','1870','1871','1872','1873',
-'1874','1875','1876','1877','1878','1879','1880','1881','1882','1883','1884','1885','1886','1887','1888',
-'1889','1890','1891','1892','1893','1894','1895','1896','1897','1898','1899','1900','1901','1902','1903',
-'1904','1905','1906','1907','1908','1909','1910','1911','1912','1913','1914','1915','1916','1917','1918',
-'1919','1920','1921','1922','1923','1924','1925','1926','1927','1928','1929','1930','1931','1932','1933',
-'1934','1935','1936','1937','1938','1939','1940','1941','1942','1943','1944','1945','1946','1947','1948',
-'1949','1950','1951','1952','1953','1954','1955','1956','1957','1958','1959','1960','1961','1962','1963',
-'1964','1965','1966','1967','1968','1969','1970','1971','1972','1973','1974','1975','1976','1977','1978',
-'1979','1980','1981','1982','1983','1984','1985','1986','1987','1988','1989','1990','1991','1992','1993',
-'1994','1995','1996','1997','1998','1999','2000','2001','2002','2003','2004','2005','2006','2007','2008',
-'2009','2010','2011','2012','2013','2014','2015','2016','2017','2018']
-
-@st.cache_data
-def load_data():
- df=pd.read_csv('data/income_per_person_gdppercapita_ppp_inflation_adjusted.csv')
- df.rename(columns={'geo':'Country'},inplace=True)
- df.set_index('Country',inplace=True)
- df['Total'] = df[Years].sum(axis=1)
- df['Average']=df[Years].mean(axis=1) # restrict to the year columns so 'Total' is not averaged in
- df['Maximum']=df[Years].max(axis=1)
- df['Minimum']=df[Years].min(axis=1)
- df.sort_index(inplace=True)
- return df
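-# st.cache_data keeps the parsed DataFrame in memory, so widget-driven reruns
-# below do not re-read the CSV from disk.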
-
-st.title('Income per Person GDP per Capita PPP Inflation Adjusted')
-df = load_data()
-st.dataframe(df,use_container_width=True)
-
-countries= df.index.unique().tolist()
-Graphs = ['bar','pie','line','area','funnel']
-c1,c2 = st.columns(2)
-country = c1.selectbox("Select a Country", countries)
-Graph = c2.selectbox("Select a Graph type", Graphs)
-
-st.header("Country wise visualization")
-cdf = df.loc[country,Years].reset_index()
-cdf.rename({'index':'Years'},axis=1, inplace=True)
-if Graph == Graphs[0]:
- fig = px.bar(cdf, 'Years', country, title=f'{country} Income per Person GDP per Capita PPP Inflation Adjusted')
-elif Graph == Graphs[1]:
- fig = px.pie(cdf, 'Years', country, title=f'{country} Income per Person GDP per Capita PPP Inflation Adjusted')
-elif Graph == Graphs[2]:
- fig = px.line(cdf, 'Years', country, title=f'{country} Income per Person GDP per Capita PPP Inflation Adjusted')
-elif Graph == Graphs[3]:
- fig = px.area(cdf, 'Years', country, title=f'{country} Income per Person GDP per Capita PPP Inflation Adjusted')
-else:
- fig = px.funnel(cdf, 'Years', country, title=f'{country} Income per Person GDP per Capita PPP Inflation Adjusted')
-st.plotly_chart(fig, use_container_width=True)
-
-st.header("Comparison of Countries")
-clist = st.multiselect("Select countries to compare", countries, default='India')
-cdf = df.loc[clist, Years].T # transpose so that years become the index for plotting
-st.write(cdf)
-figc = px.line(cdf,cdf.index, clist, title=f'Comparing {", ".join(clist)}')
-
-st.plotly_chart(figc, use_container_width=True)
-
-
-df.sort_values(by='Total', ascending=False, inplace=True)
-fig1=px.bar(df, x=df.index, y='Total',title='Total Income per Person GDP per Capita PPP Inflation Adjusted')
-st.plotly_chart(fig1, use_container_width=True)
-
-dfavg = df.sort_values(by='Average').reset_index()
-dfavg.rename({'index':'Country'},axis=1,inplace=True)
-fig2=px.bar(dfavg, 'Country', 'Average', title="Average Income per Person GDP per Capita PPP Inflation Adjusted by Country")
-st.plotly_chart(fig2, use_container_width=True)
-
-dfmax=df.sort_values(by='Maximum').reset_index()
-dfmax.rename({'index':'Country'},axis=1,inplace=True)
-fig3=px.bar(dfmax,'Country','Maximum',title='Maximum Income per Person GDP per Capita PPP Inflation Adjusted by Country')
-st.plotly_chart(fig3, use_container_width=True)
-
-dfmin=df.sort_values(by='Minimum').reset_index()
-dfmin.rename({'index':'Country'},axis=1,inplace=True)
-fig4=px.bar(dfmin,'Country','Minimum',title='Minimum Income per Person GDP per Capita PPP Inflation Adjusted by Country')
-st.plotly_chart(fig4,use_container_width=True)
-
-df.sort_values(by='Country',ascending=False,inplace=True) # in-place sort by country name; sort_values returns None here, so nothing to assign
-fig5 = px.line(df, x=df.index, y='Maximum',title='Maximum and Minimum Income per Person GDP per Capita PPP Inflation Adjusted comparisons')
-fig5.add_scatter(x=df.index, y=df['Minimum'], mode='lines',)
-st.plotly_chart(fig5, use_container_width=True)
\ No newline at end of file
diff --git a/spaces/VIPLab/Track-Anything/demo.py b/spaces/VIPLab/Track-Anything/demo.py
deleted file mode 100644
index bf5d4d2129751906128f6db9b37070f41b89ac1a..0000000000000000000000000000000000000000
--- a/spaces/VIPLab/Track-Anything/demo.py
+++ /dev/null
@@ -1,87 +0,0 @@
-from metaseg import SegAutoMaskPredictor, SegManualMaskPredictor, SahiAutoSegmentation, sahi_sliced_predict
-
-# For image
-
-def automask_image_app(image_path, model_type, points_per_side, points_per_batch, min_area):
- SegAutoMaskPredictor().image_predict(
- source=image_path,
- model_type=model_type, # vit_l, vit_h, vit_b
- points_per_side=points_per_side,
- points_per_batch=points_per_batch,
- min_area=min_area,
- output_path="output.png",
- show=False,
- save=True,
- )
- return "output.png"
-
-
-# For video
-
-def automask_video_app(video_path, model_type, points_per_side, points_per_batch, min_area):
- SegAutoMaskPredictor().video_predict(
- source=video_path,
- model_type=model_type, # vit_l, vit_h, vit_b
- points_per_side=points_per_side,
- points_per_batch=points_per_batch,
- min_area=min_area,
- output_path="output.mp4",
- )
- return "output.mp4"
-
-
-# For manual box and point selection
-
-def manual_app(image_path, model_type, input_point, input_label, input_box, multimask_output, random_color):
- SegManualMaskPredictor().image_predict(
- source=image_path,
- model_type=model_type, # vit_l, vit_h, vit_b
- input_point=input_point,
- input_label=input_label,
- input_box=input_box,
- multimask_output=multimask_output,
- random_color=random_color,
- output_path="output.png",
- show=False,
- save=True,
- )
- return "output.png"
-
-
-# For sahi sliced prediction
-
-def sahi_autoseg_app(
- image_path,
- sam_model_type,
- detection_model_type,
- detection_model_path,
- conf_th,
- image_size,
- slice_height,
- slice_width,
- overlap_height_ratio,
- overlap_width_ratio,
-):
- boxes = sahi_sliced_predict(
- image_path=image_path,
- detection_model_type=detection_model_type, # yolov8, detectron2, mmdetection, torchvision
- detection_model_path=detection_model_path,
- conf_th=conf_th,
- image_size=image_size,
- slice_height=slice_height,
- slice_width=slice_width,
- overlap_height_ratio=overlap_height_ratio,
- overlap_width_ratio=overlap_width_ratio,
- )
-
- SahiAutoSegmentation().predict(
- source=image_path,
- model_type=sam_model_type,
- input_box=boxes,
- multimask_output=False,
- random_color=False,
- show=False,
- save=True,
- )
-
- return "output.png"
diff --git a/spaces/VivianShi/Coconet-Pytorch/README.md b/spaces/VivianShi/Coconet-Pytorch/README.md
deleted file mode 100644
index 4d1a6e9c64971e77277b2cb27ee9c0a22445fdd4..0000000000000000000000000000000000000000
--- a/spaces/VivianShi/Coconet-Pytorch/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Coconet Pytorch
-emoji: 👁
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Vrk/SkimLit/Tokenizer.py b/spaces/Vrk/SkimLit/Tokenizer.py
deleted file mode 100644
index 7986c5c8ea4d0cfa009437a571a413d2b23cfce2..0000000000000000000000000000000000000000
--- a/spaces/Vrk/SkimLit/Tokenizer.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import numpy as np
-import json
-from collections import Counter # required by fit_on_texts
-
-class Tokenizer(object):
- def __init__(self, char_level, num_tokens=None,
- pad_token="", oov_token="",
- token_to_index=None):
- self.char_level = char_level
- self.separator = "" if self.char_level else " "
- if num_tokens: num_tokens -= 2 # pad + unk tokens
- self.num_tokens = num_tokens
- self.pad_token = pad_token
- self.oov_token = oov_token
- if not token_to_index:
- token_to_index = {pad_token: 0, oov_token: 1}
- self.token_to_index = token_to_index
- self.index_to_token = {v: k for k, v in self.token_to_index.items()}
-
- def __len__(self):
- return len(self.token_to_index)
-
- def __str__(self):
- return f""
-
- def fit_on_texts(self, texts):
- if not self.char_level:
- texts = [text.split(" ") for text in texts]
- all_tokens = [token for text in texts for token in text]
- counts = Counter(all_tokens).most_common(self.num_tokens)
- self.min_token_freq = counts[-1][1]
- for token, count in counts:
- index = len(self)
- self.token_to_index[token] = index
- self.index_to_token[index] = token
- return self
-
- def texts_to_sequences(self, texts):
- sequences = []
- for text in texts:
- if not self.char_level:
- text = text.split(" ")
- sequence = []
- for token in text:
- sequence.append(self.token_to_index.get(
- token, self.token_to_index[self.oov_token]))
- sequences.append(np.asarray(sequence))
- return sequences
-
- def sequences_to_texts(self, sequences):
- texts = []
- for sequence in sequences:
- text = []
- for index in sequence:
- text.append(self.index_to_token.get(index, self.oov_token))
- texts.append(self.separator.join([token for token in text]))
- return texts
-
- def save(self, fp):
- with open(fp, "w") as fp:
- contents = {
- "char_level": self.char_level,
- "oov_token": self.oov_token,
- "token_to_index": self.token_to_index
- }
- json.dump(contents, fp, indent=4, sort_keys=False)
-
- @classmethod
- def load(cls, fp):
- with open(fp, "r") as fp:
- kwargs = json.load(fp=fp)
- return cls(**kwargs)
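-# Usage sketch (hypothetical data):
-# tokenizer = Tokenizer(char_level=False, num_tokens=10000)
-# tokenizer.fit_on_texts(["first abstract text", "second abstract text"])
-# sequences = tokenizer.texts_to_sequences(["an unseen abstract"]) # OOV words map to the oov_token index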
diff --git a/spaces/Warvito/diffusion_brain/README.md b/spaces/Warvito/diffusion_brain/README.md
deleted file mode 100644
index eb489712d3506572a239c26036acfd2a311f9cef..0000000000000000000000000000000000000000
--- a/spaces/Warvito/diffusion_brain/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Brain Diffusion
-emoji: 🏢🧠
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/models.py b/spaces/XzJosh/Bekki-Bert-VITS2/models.py
deleted file mode 100644
index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Bekki-Bert-VITS2/models.py
+++ /dev/null
@@ -1,707 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-from commons import init_weights, get_padding
-from text import symbols, num_tones, num_languages
-class DurationDiscriminator(nn.Module): #vits2
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.dur_proj = nn.Conv1d(1, filter_channels, 1)
-
- self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_1 = modules.LayerNorm(filter_channels)
- self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_2 = modules.LayerNorm(filter_channels)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- self.output_layer = nn.Sequential(
- nn.Linear(filter_channels, 1),
- nn.Sigmoid()
- )
-
- def forward_probability(self, x, x_mask, dur, g=None):
- dur = self.dur_proj(dur)
- x = torch.cat([x, dur], dim=1)
- x = self.pre_out_conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_1(x)
- x = self.drop(x)
- x = self.pre_out_conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_2(x)
- x = self.drop(x)
- x = x * x_mask
- x = x.transpose(1, 2)
- output_prob = self.output_layer(x)
- return output_prob
-
- def forward(self, x, x_mask, dur_r, dur_hat, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
-
- output_probs = []
- for dur in [dur_r, dur_hat]:
- output_prob = self.forward_probability(x, x_mask, dur, g)
- output_probs.append(output_prob)
-
- return output_probs
-
-class TransformerCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- n_flows=4,
- gin_channels=0,
- share_parameter=False
- ):
-
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
-
- self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None
-
- for i in range(n_flows):
- self.flows.append(
- modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # this override needs to be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=0):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
- self.emb = nn.Embedding(len(symbols), hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
- self.tone_emb = nn.Embedding(num_tones, hidden_channels)
- nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5)
- self.language_emb = nn.Embedding(num_languages, hidden_channels)
- nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5)
- self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, tone, language, bert, g=None):
- x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask, g=g)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-class ReferenceEncoder(nn.Module):
- '''
- inputs --- [N, Ty/r, n_mels*r] mels
- outputs --- [N, ref_enc_gru_size]
- '''
-
- def __init__(self, spec_channels, gin_channels=0):
-
- super().__init__()
- self.spec_channels = spec_channels
- ref_enc_filters = [32, 32, 64, 64, 128, 128]
- K = len(ref_enc_filters)
- filters = [1] + ref_enc_filters
- convs = [weight_norm(nn.Conv2d(in_channels=filters[i],
- out_channels=filters[i + 1],
- kernel_size=(3, 3),
- stride=(2, 2),
- padding=(1, 1))) for i in range(K)]
- self.convs = nn.ModuleList(convs)
- # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)])
-
- out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
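- # out_channels here is the frequency dimension left after the K stride-2 convs;
- # multiplied by the last conv's channel count it gives the flattened GRU input size.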
- self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels,
- hidden_size=256 // 2,
- batch_first=True)
- self.proj = nn.Linear(128, gin_channels)
-
- def forward(self, inputs, mask=None):
- N = inputs.size(0)
- out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
- for conv in self.convs:
- out = conv(out)
- # out = wn(out)
- out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
-
- out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
- T = out.size(1)
- N = out.size(0)
- out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
-
- self.gru.flatten_parameters()
- memory, out = self.gru(out) # out --- [1, N, 128]
-
- return self.proj(out.squeeze(0))
-
- def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
- for i in range(n_convs):
- L = (L - kernel_size + 2 * pad) // stride + 1
- return L
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=256,
- gin_channels=256,
- use_sdp=True,
- n_flow_layer = 4,
- n_layers_trans_flow = 3,
- flow_share_parameter = False,
- use_transformer_flow = True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
- self.n_layers_trans_flow = n_layers_trans_flow
- self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True)
- self.use_sdp = use_sdp
- self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
- self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
- self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
- self.current_mas_noise_scale = self.mas_noise_scale_initial
- if self.use_spk_conditioned_encoder and gin_channels > 0:
- self.enc_gin_channels = gin_channels
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.enc_gin_channels)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- if use_transformer_flow:
- self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter)
- else:
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels)
- self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers >= 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
- else:
- self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert):
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
- if self.use_noise_scaled_mas:
- epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale
- neg_cent = neg_cent + epsilon
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
-
- l_length_sdp = self.sdp(x, x_mask, w, g=g)
- l_length_sdp = l_length_sdp / torch.sum(x_mask)
-
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- l_length = l_length_dp + l_length_sdp
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_)
-
- def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None):
- #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
- # g = self.gst(y)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1,
- 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
diff --git a/spaces/XzJosh/Bella-Bert-VITS2/commons.py b/spaces/XzJosh/Bella-Bert-VITS2/commons.py
deleted file mode 100644
index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Bella-Bert-VITS2/commons.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
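-# e.g. get_padding(5) -> 2 and get_padding(3, dilation=2) -> 2 ("same"-length padding for odd kernels)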
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
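-# e.g. intersperse([1, 2, 3], 0) -> [0, 1, 0, 2, 0, 3, 0]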
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
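-# e.g. subsequent_mask(3) -> a [1, 1, 3, 3] lower-triangular mask of ones (causal attention mask)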
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/XzJosh/maimai-Bert-VITS2/mel_processing.py b/spaces/XzJosh/maimai-Bert-VITS2/mel_processing.py
deleted file mode 100644
index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/maimai-Bert-VITS2/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
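
The deleted `mel_spectrogram_torch` above is self-contained; a sketch of calling it on a file loaded with librosa follows. The STFT parameters are illustrative only (the real values live in the Space's config), and note that recent librosa releases require keyword arguments for `librosa.filters.mel`, which this file calls positionally:

```python
import torch
import librosa

wav, sr = librosa.load("example.wav", sr=44100, mono=True)   # waveform in [-1, 1]
y = torch.from_numpy(wav).float().unsqueeze(0)               # [1, T]

mel = mel_spectrogram_torch(
    y, n_fft=2048, num_mels=128, sampling_rate=sr,
    hop_size=512, win_size=2048, fmin=0.0, fmax=None,
)
print(mel.shape)                                             # [1, 128, n_frames]
```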
diff --git a/spaces/XzJosh/nanami-Bert-VITS2/text/japanese.py b/spaces/XzJosh/nanami-Bert-VITS2/text/japanese.py
deleted file mode 100644
index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/nanami-Bert-VITS2/text/japanese.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py
-import re
-import sys
-
-import pyopenjtalk
-
-from text import symbols
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def preprocess_jap(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = []
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- p = pyopenjtalk.g2p(sentence)
- text += p.split(" ")
-
- if i < len(marks):
- text += [marks[i].replace(' ', '')]
- return text
-
-def text_normalize(text):
- # todo: jap text normalize
- return text
-
-def g2p(norm_text):
- phones = preprocess_jap(norm_text)
- phones = [post_replace_ph(i) for i in phones]
- # todo: implement tones and word2ph
- tones = [0 for i in phones]
- word2ph = [1 for i in phones]
- return phones, tones, word2ph
-
-
-if __name__ == '__main__':
- for line in open("../../../Downloads/transcript_utf8.txt").readlines():
- text = line.split(":")[1]
- phones, tones, word2ph = g2p(text)
- for p in phones:
- if p == "z":
- print(text, phones)
- sys.exit(0)
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/common/data/coco.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/common/data/coco.py
deleted file mode 100644
index 703c4385c7ddc7eb0759c98d102ab2384d6a9e3e..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/common/data/coco.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from omegaconf import OmegaConf
-
-import detectron2.data.transforms as T
-from detectron2.config import LazyCall as L
-from detectron2.data import (
- DatasetMapper,
- build_detection_test_loader,
- build_detection_train_loader,
- get_detection_dataset_dicts,
-)
-from detectron2.evaluation import COCOEvaluator
-
-dataloader = OmegaConf.create()
-
-dataloader.train = L(build_detection_train_loader)(
- dataset=L(get_detection_dataset_dicts)(names="coco_2017_train"),
- mapper=L(DatasetMapper)(
- is_train=True,
- augmentations=[
- L(T.ResizeShortestEdge)(
- short_edge_length=(640, 672, 704, 736, 768, 800),
- sample_style="choice",
- max_size=1333,
- ),
- L(T.RandomFlip)(horizontal=True),
- ],
- image_format="BGR",
- use_instance_mask=True,
- ),
- total_batch_size=16,
- num_workers=4,
-)
-
-dataloader.test = L(build_detection_test_loader)(
- dataset=L(get_detection_dataset_dicts)(names="coco_2017_val", filter_empty=False),
- mapper=L(DatasetMapper)(
- is_train=False,
- augmentations=[
- L(T.ResizeShortestEdge)(short_edge_length=800, max_size=1333),
- ],
- image_format="${...train.mapper.image_format}",
- ),
- num_workers=4,
-)
-
-dataloader.evaluator = L(COCOEvaluator)(
- dataset_name="${..test.dataset.names}",
-)
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py
deleted file mode 100644
index 74ac123a7aed6cd77d6d833446a831d9048745b2..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import copy
-import io
-import logging
-import numpy as np
-from typing import List
-import onnx
-import torch
-from caffe2.proto import caffe2_pb2
-from caffe2.python import core
-from caffe2.python.onnx.backend import Caffe2Backend
-from tabulate import tabulate
-from termcolor import colored
-from torch.onnx import OperatorExportTypes
-
-from .shared import (
- ScopedWS,
- construct_init_net_from_params,
- fuse_alias_placeholder,
- fuse_copy_between_cpu_and_gpu,
- get_params_from_init_net,
- group_norm_replace_aten_with_caffe2,
- infer_device_type,
- remove_dead_end_ops,
- remove_reshape_for_fc,
- save_graph,
-)
-
-logger = logging.getLogger(__name__)
-
-
-def export_onnx_model(model, inputs):
- """
- Trace and export a model to onnx format.
-
- Args:
- model (nn.Module):
- inputs (tuple[args]): the model will be called by `model(*inputs)`
-
- Returns:
- an onnx model
- """
- assert isinstance(model, torch.nn.Module)
-
- # make sure all modules are in eval mode, onnx may change the training state
- # of the module if the states are not consistent
- def _check_eval(module):
- assert not module.training
-
- model.apply(_check_eval)
-
- # Export the model to ONNX
- with torch.no_grad():
- with io.BytesIO() as f:
- torch.onnx.export(
- model,
- inputs,
- f,
- operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
- # verbose=True, # NOTE: uncomment this for debugging
- # export_params=True,
- )
- onnx_model = onnx.load_from_string(f.getvalue())
-
- # Apply ONNX's Optimization
- all_passes = onnx.optimizer.get_available_passes()
- passes = ["fuse_bn_into_conv"]
- assert all(p in all_passes for p in passes)
- onnx_model = onnx.optimizer.optimize(onnx_model, passes)
- return onnx_model
-
-
-def _op_stats(net_def):
- type_count = {}
- for t in [op.type for op in net_def.op]:
- type_count[t] = type_count.get(t, 0) + 1
- type_count_list = sorted(type_count.items(), key=lambda kv: kv[0]) # alphabet
- type_count_list = sorted(type_count_list, key=lambda kv: -kv[1]) # count
- return "\n".join("{:>4}x {}".format(count, name) for name, count in type_count_list)
-
-
-def _assign_device_option(
- predict_net: caffe2_pb2.NetDef, init_net: caffe2_pb2.NetDef, tensor_inputs: List[torch.Tensor]
-):
- """
-    An ONNX-exported network has no concept of device; assign the necessary
-    device option to each op in order to make it runnable on a GPU runtime.
- """
-
- def _get_device_type(torch_tensor):
- assert torch_tensor.device.type in ["cpu", "cuda"]
- assert torch_tensor.device.index == 0
- return torch_tensor.device.type
-
- def _assign_op_device_option(net_proto, net_ssa, blob_device_types):
- for op, ssa_i in zip(net_proto.op, net_ssa):
- if op.type in ["CopyCPUToGPU", "CopyGPUToCPU"]:
- op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
- else:
- devices = [blob_device_types[b] for b in ssa_i[0] + ssa_i[1]]
- assert all(d == devices[0] for d in devices)
- if devices[0] == "cuda":
- op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
-
- # update ops in predict_net
- predict_net_input_device_types = {
- (name, 0): _get_device_type(tensor)
- for name, tensor in zip(predict_net.external_input, tensor_inputs)
- }
- predict_net_device_types = infer_device_type(
- predict_net, known_status=predict_net_input_device_types, device_name_style="pytorch"
- )
- predict_net_ssa, _ = core.get_ssa(predict_net)
- _assign_op_device_option(predict_net, predict_net_ssa, predict_net_device_types)
-
- # update ops in init_net
- init_net_ssa, versions = core.get_ssa(init_net)
- init_net_output_device_types = {
- (name, versions[name]): predict_net_device_types[(name, 0)]
- for name in init_net.external_output
- }
- init_net_device_types = infer_device_type(
- init_net, known_status=init_net_output_device_types, device_name_style="pytorch"
- )
- _assign_op_device_option(init_net, init_net_ssa, init_net_device_types)
-
-
-def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs: List[torch.Tensor]):
- """
- Export a caffe2-compatible Detectron2 model to caffe2 format via ONNX.
-
-    Args:
- model: a caffe2-compatible version of detectron2 model, defined in caffe2_modeling.py
- tensor_inputs: a list of tensors that caffe2 model takes as input.
- """
- model = copy.deepcopy(model)
- assert isinstance(model, torch.nn.Module)
- assert hasattr(model, "encode_additional_info")
-
- # Export via ONNX
- logger.info(
- "Exporting a {} model via ONNX ...".format(type(model).__name__)
- + " Some warnings from ONNX are expected and are usually not to worry about."
- )
- onnx_model = export_onnx_model(model, (tensor_inputs,))
- # Convert ONNX model to Caffe2 protobuf
- init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)
- ops_table = [[op.type, op.input, op.output] for op in predict_net.op]
- table = tabulate(ops_table, headers=["type", "input", "output"], tablefmt="pipe")
- logger.info(
- "ONNX export Done. Exported predict_net (before optimizations):\n" + colored(table, "cyan")
- )
-
- # Apply protobuf optimization
- fuse_alias_placeholder(predict_net, init_net)
- if any(t.device.type != "cpu" for t in tensor_inputs):
- fuse_copy_between_cpu_and_gpu(predict_net)
- remove_dead_end_ops(init_net)
- _assign_device_option(predict_net, init_net, tensor_inputs)
- params, device_options = get_params_from_init_net(init_net)
- predict_net, params = remove_reshape_for_fc(predict_net, params)
- init_net = construct_init_net_from_params(params, device_options)
- group_norm_replace_aten_with_caffe2(predict_net)
-
- # Record necessary information for running the pb model in Detectron2 system.
- model.encode_additional_info(predict_net, init_net)
-
- logger.info("Operators used in predict_net: \n{}".format(_op_stats(predict_net)))
- logger.info("Operators used in init_net: \n{}".format(_op_stats(init_net)))
-
- return predict_net, init_net
-
-
-def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_path):
- """
- Run the caffe2 model on given inputs, recording the shape and draw the graph.
-
- predict_net/init_net: caffe2 model.
- tensor_inputs: a list of tensors that caffe2 model takes as input.
- graph_save_path: path for saving graph of exported model.
- """
-
- logger.info("Saving graph of ONNX exported model to {} ...".format(graph_save_path))
- save_graph(predict_net, graph_save_path, op_only=False)
-
- # Run the exported Caffe2 net
- logger.info("Running ONNX exported model ...")
- with ScopedWS("__ws_tmp__", True) as ws:
- ws.RunNetOnce(init_net)
- initialized_blobs = set(ws.Blobs())
- uninitialized = [inp for inp in predict_net.external_input if inp not in initialized_blobs]
- for name, blob in zip(uninitialized, tensor_inputs):
- ws.FeedBlob(name, blob)
-
- try:
- ws.RunNetOnce(predict_net)
- except RuntimeError as e:
- logger.warning("Encountered RuntimeError: \n{}".format(str(e)))
-
- ws_blobs = {b: ws.FetchBlob(b) for b in ws.Blobs()}
- blob_sizes = {b: ws_blobs[b].shape for b in ws_blobs if isinstance(ws_blobs[b], np.ndarray)}
-
- logger.info("Saving graph with blob shapes to {} ...".format(graph_save_path))
- save_graph(predict_net, graph_save_path, op_only=False, blob_sizes=blob_sizes)
-
- return ws_blobs
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/caffe2_patch.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/caffe2_patch.py
deleted file mode 100644
index c9eee594a27cdec29ce5f2b6f7730171eda3805e..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/caffe2_patch.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import contextlib
-from unittest import mock
-import torch
-
-from detectron2.modeling import poolers
-from detectron2.modeling.proposal_generator import rpn
-from detectron2.modeling.roi_heads import keypoint_head, mask_head
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-
-from .c10 import (
- Caffe2Compatible,
- Caffe2FastRCNNOutputsInference,
- Caffe2KeypointRCNNInference,
- Caffe2MaskRCNNInference,
- Caffe2ROIPooler,
- Caffe2RPN,
-)
-
-
-class GenericMixin(object):
- pass
-
-
-class Caffe2CompatibleConverter(object):
- """
- A GenericUpdater which implements the `create_from` interface, by modifying
- module object and assign it with another class replaceCls.
- """
-
- def __init__(self, replaceCls):
- self.replaceCls = replaceCls
-
- def create_from(self, module):
- # update module's class to the new class
- assert isinstance(module, torch.nn.Module)
- if issubclass(self.replaceCls, GenericMixin):
- # replaceCls should act as mixin, create a new class on-the-fly
- new_class = type(
- "{}MixedWith{}".format(self.replaceCls.__name__, module.__class__.__name__),
- (self.replaceCls, module.__class__),
- {}, # {"new_method": lambda self: ...},
- )
- module.__class__ = new_class
- else:
- # replaceCls is complete class, this allow arbitrary class swap
- module.__class__ = self.replaceCls
-
- # initialize Caffe2Compatible
- if isinstance(module, Caffe2Compatible):
- module.tensor_mode = False
-
- return module
-
-
-def patch(model, target, updater, *args, **kwargs):
- """
- recursively (post-order) update all modules with the target type and its
- subclasses, make a initialization/composition/inheritance/... via the
- updater.create_from.
- """
- for name, module in model.named_children():
- model._modules[name] = patch(module, target, updater, *args, **kwargs)
- if isinstance(model, target):
- return updater.create_from(model, *args, **kwargs)
- return model
-
-
-def patch_generalized_rcnn(model):
- ccc = Caffe2CompatibleConverter
- model = patch(model, rpn.RPN, ccc(Caffe2RPN))
- model = patch(model, poolers.ROIPooler, ccc(Caffe2ROIPooler))
-
- return model
-
-
-@contextlib.contextmanager
-def mock_fastrcnn_outputs_inference(
- tensor_mode, check=True, box_predictor_type=FastRCNNOutputLayers
-):
- with mock.patch.object(
- box_predictor_type,
- "inference",
- autospec=True,
- side_effect=Caffe2FastRCNNOutputsInference(tensor_mode),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True):
- with mock.patch(
- "{}.mask_rcnn_inference".format(patched_module), side_effect=Caffe2MaskRCNNInference()
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatmap_max_keypoint, check=True):
- with mock.patch(
- "{}.keypoint_rcnn_inference".format(patched_module),
- side_effect=Caffe2KeypointRCNNInference(use_heatmap_max_keypoint),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-class ROIHeadsPatcher:
- def __init__(self, heads, use_heatmap_max_keypoint):
- self.heads = heads
- self.use_heatmap_max_keypoint = use_heatmap_max_keypoint
-
- @contextlib.contextmanager
- def mock_roi_heads(self, tensor_mode=True):
- """
- Patching several inference functions inside ROIHeads and its subclasses
-
- Args:
- tensor_mode (bool): whether the inputs/outputs are caffe2's tensor
- format or not. Default to True.
- """
-        # NOTE: this requires the `keypoint_rcnn_inference` and `mask_rcnn_inference`
- # are called inside the same file as BaseXxxHead due to using mock.patch.
- kpt_heads_mod = keypoint_head.BaseKeypointRCNNHead.__module__
- mask_head_mod = mask_head.BaseMaskRCNNHead.__module__
-
- mock_ctx_managers = [
- mock_fastrcnn_outputs_inference(
- tensor_mode=tensor_mode,
- check=True,
- box_predictor_type=type(self.heads.box_predictor),
- )
- ]
- if getattr(self.heads, "keypoint_on", False):
- mock_ctx_managers += [
- mock_keypoint_rcnn_inference(
- tensor_mode, kpt_heads_mod, self.use_heatmap_max_keypoint
- )
- ]
- if getattr(self.heads, "mask_on", False):
- mock_ctx_managers += [mock_mask_rcnn_inference(tensor_mode, mask_head_mod)]
-
- with contextlib.ExitStack() as stack: # python 3.3+
- for mgr in mock_ctx_managers:
- stack.enter_context(mgr)
- yield
diff --git a/spaces/Yunshansongbai/SVC-Nahida/hubert/hubert_model_onnx.py b/spaces/Yunshansongbai/SVC-Nahida/hubert/hubert_model_onnx.py
deleted file mode 100644
index 86864321a3b8c5e9fc0f688285f1cc72844a63ee..0000000000000000000000000000000000000000
--- a/spaces/Yunshansongbai/SVC-Nahida/hubert/hubert_model_onnx.py
+++ /dev/null
@@ -1,217 +0,0 @@
-import copy
-import random
-from typing import Optional, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as t_func
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-
-class Hubert(nn.Module):
- def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
- super().__init__()
- self._mask = mask
- self.feature_extractor = FeatureExtractor()
- self.feature_projection = FeatureProjection()
- self.positional_embedding = PositionalConvEmbedding()
- self.norm = nn.LayerNorm(768)
- self.dropout = nn.Dropout(0.1)
- self.encoder = TransformerEncoder(
- nn.TransformerEncoderLayer(
- 768, 12, 3072, activation="gelu", batch_first=True
- ),
- 12,
- )
- self.proj = nn.Linear(768, 256)
-
- self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
- self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
- def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- mask = None
- if self.training and self._mask:
- mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
- x[mask] = self.masked_spec_embed.to(x.dtype)
- return x, mask
-
- def encode(
- self, x: torch.Tensor, layer: Optional[int] = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- x = self.feature_extractor(x)
- x = self.feature_projection(x.transpose(1, 2))
- x, mask = self.mask(x)
- x = x + self.positional_embedding(x)
- x = self.dropout(self.norm(x))
- x = self.encoder(x, output_layer=layer)
- return x, mask
-
- def logits(self, x: torch.Tensor) -> torch.Tensor:
- logits = torch.cosine_similarity(
- x.unsqueeze(2),
- self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
- dim=-1,
- )
- return logits / 0.1
-
-
-class HubertSoft(Hubert):
- def __init__(self):
- super().__init__()
-
- def units(self, wav: torch.Tensor) -> torch.Tensor:
- wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
- x, _ = self.encode(wav)
- return self.proj(x)
-
- def forward(self, x):
- return self.units(x)
-
-class FeatureExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
- self.norm0 = nn.GroupNorm(512, 512)
- self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
- self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = t_func.gelu(self.norm0(self.conv0(x)))
- x = t_func.gelu(self.conv1(x))
- x = t_func.gelu(self.conv2(x))
- x = t_func.gelu(self.conv3(x))
- x = t_func.gelu(self.conv4(x))
- x = t_func.gelu(self.conv5(x))
- x = t_func.gelu(self.conv6(x))
- return x
-
-
-class FeatureProjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = nn.LayerNorm(512)
- self.projection = nn.Linear(512, 768)
- self.dropout = nn.Dropout(0.1)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.norm(x)
- x = self.projection(x)
- x = self.dropout(x)
- return x
-
-
-class PositionalConvEmbedding(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv = nn.Conv1d(
- 768,
- 768,
- kernel_size=128,
- padding=128 // 2,
- groups=16,
- )
- self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.conv(x.transpose(1, 2))
- x = t_func.gelu(x[:, :, :-1])
- return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
- ) -> None:
- super(TransformerEncoder, self).__init__()
-        self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
- )
- self.num_layers = num_layers
-
- def forward(
- self,
- src: torch.Tensor,
- mask: torch.Tensor = None,
- src_key_padding_mask: torch.Tensor = None,
- output_layer: Optional[int] = None,
- ) -> torch.Tensor:
- output = src
- for layer in self.layers[:output_layer]:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
- )
- return output
-
-
-def _compute_mask(
- shape: Tuple[int, int],
- mask_prob: float,
- mask_length: int,
- device: torch.device,
- min_masks: int = 0,
-) -> torch.Tensor:
- batch_size, sequence_length = shape
-
- if mask_length < 1:
- raise ValueError("`mask_length` has to be bigger than 0.")
-
- if mask_length > sequence_length:
- raise ValueError(
- f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`"
- )
-
- # compute number of masked spans in batch
- num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
- num_masked_spans = max(num_masked_spans, min_masks)
-
- # make sure num masked indices <= sequence_length
- if num_masked_spans * mask_length > sequence_length:
- num_masked_spans = sequence_length // mask_length
-
- # SpecAugment mask to fill
- mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
- # uniform distribution to sample from, make sure that offset samples are < sequence_length
- uniform_dist = torch.ones(
- (batch_size, sequence_length - (mask_length - 1)), device=device
- )
-
- # get random indices to mask
- mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
- # expand masked indices to masked spans
- mask_indices = (
- mask_indices.unsqueeze(dim=-1)
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- offsets = (
- torch.arange(mask_length, device=device)[None, None, :]
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- mask_idxs = mask_indices + offsets
-
- # scatter indices to mask
- mask = mask.scatter(1, mask_idxs, True)
-
- return mask
-
-
-def hubert_soft(
- path: str,
-) -> HubertSoft:
- r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
- Args:
- path (str): path of a pretrained model
- """
- hubert = HubertSoft()
- checkpoint = torch.load(path)
- consume_prefix_in_state_dict_if_present(checkpoint, "module.")
- hubert.load_state_dict(checkpoint)
- hubert.eval()
- return hubert
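
Given the file name, this module appears intended for ONNX export of the soft-HuBERT content encoder. A sketch of such an export, assuming a local 16 kHz checkpoint (`hubert-soft.pt` is a placeholder path) and a recent opset; the dynamic axes simply let the sample and frame counts vary:

```python
import torch

hubert = hubert_soft("hubert-soft.pt")      # placeholder path; loads weights, sets eval()
dummy_wav = torch.randn(1, 1, 16000)        # one second of 16 kHz audio, shape [B, 1, T]

torch.onnx.export(
    hubert,
    dummy_wav,
    "hubert_soft.onnx",
    input_names=["source"],
    output_names=["units"],
    dynamic_axes={"source": {2: "n_samples"}, "units": {1: "n_frames"}},
    opset_version=16,
)
```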
diff --git a/spaces/Zhenhong/text-to-speech-SpeechT5-demo/app.py b/spaces/Zhenhong/text-to-speech-SpeechT5-demo/app.py
deleted file mode 100644
index 06abd199599b975b8dbb006753a1983c0d352b2c..0000000000000000000000000000000000000000
--- a/spaces/Zhenhong/text-to-speech-SpeechT5-demo/app.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import gradio as gr
-import librosa
-import numpy as np
-import torch
-
-from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
-
-
-checkpoint = "microsoft/speecht5_tts"
-processor = SpeechT5Processor.from_pretrained(checkpoint)
-model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-
-
-speaker_embeddings = {
- "BDL": "speaker/cmu_us_bdl_arctic-wav-arctic_a0009.npy",
- "CLB": "speaker/cmu_us_clb_arctic-wav-arctic_a0144.npy",
- "KSP": "speaker/cmu_us_ksp_arctic-wav-arctic_b0087.npy",
- "RMS": "speaker/cmu_us_rms_arctic-wav-arctic_b0353.npy",
- "SLT": "speaker/cmu_us_slt_arctic-wav-arctic_a0508.npy",
-}
-
-
-def predict(text, speaker):
- if len(text.strip()) == 0:
- return (16000, np.zeros(0).astype(np.int16))
-
- inputs = processor(text=text, return_tensors="pt")
-
- # limit input length
- input_ids = inputs["input_ids"]
- input_ids = input_ids[..., :model.config.max_text_positions]
-
- if speaker == "Surprise Me!":
- # load one of the provided speaker embeddings at random
- idx = np.random.randint(len(speaker_embeddings))
- key = list(speaker_embeddings.keys())[idx]
- speaker_embedding = np.load(speaker_embeddings[key])
-
- # randomly shuffle the elements
- np.random.shuffle(speaker_embedding)
-
- # randomly flip half the values
- x = (np.random.rand(512) >= 0.5) * 1.0
- x[x == 0] = -1.0
- speaker_embedding *= x
-
- #speaker_embedding = np.random.rand(512).astype(np.float32) * 0.3 - 0.15
- else:
- speaker_embedding = np.load(speaker_embeddings[speaker[:3]])
-
- speaker_embedding = torch.tensor(speaker_embedding).unsqueeze(0)
-
- speech = model.generate_speech(input_ids, speaker_embedding, vocoder=vocoder)
-
- speech = (speech.numpy() * 32767).astype(np.int16)
- return (16000, speech)
-
-
-title = "Text-to-Speech based on SpeechT5"
-
-description = """
-The SpeechT5 model is pre-trained on text as well as speech inputs, with targets that are also a mix of text and speech.
-By pre-training on text and speech at the same time, it learns unified representations for both, resulting in improved modeling capabilities.
-
-This space demonstrates the text-to-speech (TTS) checkpoint for the English language.
-
-How to use: Enter some English text and choose a speaker. The output is a mel spectrogram, which is converted to a mono 16 kHz waveform by the HiFi-GAN vocoder. Because the model always applies random dropout, each attempt will give slightly different results.
-The Surprise Me! option creates a completely randomized speaker.
-"""
-
-article = """
-
-@article{Ao2021SpeechT5,
- title = {SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing},
- author = {Junyi Ao and Rui Wang and Long Zhou and Chengyi Wang and Shuo Ren and Yu Wu and Shujie Liu and Tom Ko and Qing Li and Yu Zhang and Zhihua Wei and Yao Qian and Jinyu Li and Furu Wei},
- eprint={2110.07205},
- archivePrefix={arXiv},
- primaryClass={eess.AS},
- year={2021}
-}
-
-"""
-
-examples = [
- ["As a Data Scientist, I'll be demonstrating my speaking voice in this example. If you don't like my voice, you can choose a different one by setting the speaker parameter.", "BDL (male)"],
- ["The octopus and Oliver went to the opera in October.", "CLB (female)"],
- ["She sells seashells by the seashore. I saw a kitten eating chicken in the kitchen.", "RMS (male)"],
- ["Brisk brave brigadiers brandished broad bright blades, blunderbusses, and bludgeons—balancing them badly.", "SLT (female)"],
- ["A synonym for cinnamon is a cinnamon synonym.", "BDL (male)"],
- ["How much wood would a woodchuck chuck if a woodchuck could chuck wood? He would chuck, he would, as much as he could, and chuck as much wood as a woodchuck would if a woodchuck could chuck wood.", "CLB (female)"],
-]
-
-gr.Interface(
- fn=predict,
- inputs=[
- gr.Text(label="Input Text"),
- gr.Radio(label="Speaker", choices=[
- "BDL (male)",
- "CLB (female)",
- "KSP (male)",
- "RMS (male)",
- "SLT (female)",
- "Surprise Me!"
- ],
- value="BDL (male)"),
- ],
- outputs=[
- gr.Audio(label="Generated Speech", type="numpy"),
- ],
- title=title,
- description=description,
- article=article,
- examples=examples,
-).launch()
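
The Space above is a thin Gradio wrapper; the same generation path can be run directly with the checkpoints named in the file. A sketch without the UI, using a random 512-dim x-vector in place of the CMU Arctic speaker embeddings the demo ships (a random vector produces a synthetic-sounding voice):

```python
import soundfile as sf
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="The octopus and Oliver went to the opera in October.",
                   return_tensors="pt")
speaker_embedding = torch.randn(1, 512) * 0.3   # stand-in for a real x-vector .npy file

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("tts.wav", speech.numpy(), 16000)      # SpeechT5 outputs 16 kHz audio
```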
diff --git a/spaces/aabyzov/playground/README.md b/spaces/aabyzov/playground/README.md
deleted file mode 100644
index 2b6ba62aa7493718ece0de747bcecc800bd4d450..0000000000000000000000000000000000000000
--- a/spaces/aabyzov/playground/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Playground
-emoji: 🌖
-colorFrom: purple
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abdvl/datahub_qa_bot/app.py b/spaces/abdvl/datahub_qa_bot/app.py
deleted file mode 100644
index 9e612099200a95735a8a419c7281d20cea6d496d..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/app.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import os
-import streamlit as st
-from langchain.chains import RetrievalQA
-from langchain.llms import OpenAI
-from langchain.document_loaders import DirectoryLoader
-from langchain.text_splitter import CharacterTextSplitter
-from langchain.indexes import VectorstoreIndexCreator
-from langchain.embeddings import OpenAIEmbeddings
-from langchain.vectorstores import Chroma
-from langchain.callbacks import get_openai_callback
-
-
-# variables
-db_folder = "db"
-qa = None
-db = None
-
-# init LLM and retriever
-def init(api_key):
- # API key
- os.environ["OPENAI_API_KEY"] = api_key
-
- # initialize the language model
- llm = OpenAI(model_name="gpt-3.5-turbo")
-
- with get_openai_callback() as cb:
- # create the embeddings and index
- embeddings = OpenAIEmbeddings()
- # Init vectorstore
- db = Chroma(persist_directory=db_folder, embedding_function=embeddings)
- # create retriever from the DB
- retriever = db.as_retriever(search_type="mmr")
- # initialize the chain
- qa = RetrievalQA.from_chain_type(
- llm=llm, chain_type="stuff", retriever=retriever)
- return qa, db
-
-
-# UI
-st.title('DataHub QA Chat Demo')
-
-# set your OpenAI API key
-api_key = st.text_input('OPENAI API KEY')
-
-# query input box
-question = st.text_input('Ask a question', 'What is DataHub')
-
-# query button
-if st.button('Query'):
- if api_key == "":
- st.write("Please provide your OpenAI API KEY, the query is cheap ($0.0001 per query))")
- else:
- qa, db = init(api_key)
- #
- st.subheader('Result from OpenAI after querying the vectorstore')
- st.write(qa.run(question))
- #
- st.subheader('Raw result from vectorstore')
- docs = db.similarity_search(question)
- result = docs[0]
- st.write(result.page_content)
- st.subheader('Source')
- st.write(result.metadata)
-
\ No newline at end of file
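
The app above assumes a Chroma index already persisted in the `db` folder. A sketch of how such an index might be built with the same (legacy) langchain imports the file uses; `docs/` is a placeholder for a local checkout of the DataHub documentation:

```python
import os
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

os.environ["OPENAI_API_KEY"] = "sk-..."                      # your API key

documents = DirectoryLoader("docs/", glob="**/*.md").load()  # placeholder docs folder
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(documents)

db = Chroma.from_documents(chunks, OpenAIEmbeddings(), persist_directory="db")
db.persist()                                                 # writes what app.py later loads
```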
diff --git a/spaces/abdvl/datahub_qa_bot/docs/advanced/browse-paths-upgrade.md b/spaces/abdvl/datahub_qa_bot/docs/advanced/browse-paths-upgrade.md
deleted file mode 100644
index e440a35c3af462f0a5f88c71204fec16a9af6a3a..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/advanced/browse-paths-upgrade.md
+++ /dev/null
@@ -1,137 +0,0 @@
-# Browse Paths Upgrade (August 2022)
-
-## Background
-
-Up to this point, there's been a historical constraint on all entity browse paths. Namely, each browse path has been
-required to end with a path component that represents "simple name" for an entity. For example, a Browse Path for a
-Snowflake Table called "test_table" may look something like this:
-
-```
-/prod/snowflake/warehouse1/db1/test_table
-```
-
-In the UI, we artificially truncate the final path component when you are browsing the Entity hierarchy, so your browse experience
-would be:
-
-`prod` > `snowflake` > `warehouse1`> `db1` > `Click Entity`
-
-As you can see, the final path component `test_table` is effectively ignored. It could have any value, and we would still ignore
-it in the UI. This behavior serves as a workaround to the historical requirement that all browse paths end with a simple name.
-
-This data constraint stands in opposition to the original intention of Browse Paths: to provide a simple mechanism for organizing
-assets into a hierarchical folder structure. For this reason, we've changed the semantics of Browse Paths to better align with the original intention.
-Going forward, you will not be required to provide a final component detailing the "name". Instead, you will be able to provide a simpler path that
-omits this final component:
-
-```
-/prod/snowflake/warehouse1/db1
-```
-
-and the browse experience from the UI will continue to work as you would expect:
-
-`prod` > `snowflake` > `warehouse1`> `db1` > `Click Entity`.
-
-With this change comes a fix to a longstanding bug where multiple browse paths could not be attached to a single URN. Going forward,
-we will support producing multiple browse paths for the same entity, and allow you to traverse via multiple paths. For example:
-
-```python
-browse_path = BrowsePathsClass(
- paths=["/powerbi/my/custom/path", "/my/other/custom/path"]
-)
-return MetadataChangeProposalWrapper(
- entityType="dataset",
- changeType="UPSERT",
-    entityUrn="urn:li:dataset:(urn:li:dataPlatform:custom,MyFileName,PROD)",
- aspectName="browsePaths",
- aspect=browse_path,
-)
-```
-*Using the Python Emitter SDK to produce multiple Browse Paths for the same entity*
-
-We've received multiple bug reports, such as [this issue](https://github.com/datahub-project/datahub/issues/5525), and requests to address these issues with Browse, and thus are deciding
-to do it now before more workarounds are created.
-
-## What this means for you
-
-Once you upgrade to DataHub `v0.8.45` you will immediately notice that traversing your Browse Path hierarchy will require
-one extra click to find the entity. This is because we are correctly displaying the FULL browse path, including the simple name mentioned above.
-
-There will be 2 ways to upgrade to the new browse path format. Depending on your ingestion sources, you may want to use one or both:
-
-1. Migrate default browse paths to the new format by restarting DataHub
-2. Upgrade your version of the `datahub` CLI to push new browse path format (version `v0.8.45`)
-
-Each step will be discussed in detail below.
-
-### 1. Migrating default browse paths to the new format
-
-To migrate those Browse Paths that are generated by DataHub by default (when no path is provided), simply restart the `datahub-gms` container / pod with a single
-additional environment variable:
-
-```
-UPGRADE_DEFAULT_BROWSE_PATHS_ENABLED=true
-```
-
-And restart the `datahub-gms` instance. This will cause GMS to perform a boot-time migration of all your existing Browse Paths
-to the new format, removing the unnecessary name component at the very end.
-
-If the migration is successful, you'll see the following in your GMS logs:
-
-```
-18:58:17.414 [main] INFO c.l.m.b.s.UpgradeDefaultBrowsePathsStep:60 - Successfully upgraded all browse paths!
-```
-
-After this one-time migration is complete, you should be able to navigate the Browse hierarchy exactly as you did previously.
-
-> Note that certain ingestion sources actively produce their own Browse Paths, which overrides the default path
-> computed by DataHub.
->
-> In these cases, getting the updated Browse Path will require re-running your ingestion process with the updated
-> version of the connector. This is discussed in more detail in the next section.
-
-### 2. Upgrading the `datahub` CLI to push new browse paths
-
-If you are actively ingesting metadata from one or more of following sources
-
-1. Sagemaker
-2. Looker / LookML
-3. Feast
-4. Kafka
-5. Mode
-6. PowerBi
-7. Pulsar
-8. Tableau
-9. Business Glossary
-
-You will need to upgrade the DataHub CLI to >= `v0.8.45` and re-run metadata ingestion. This will generate the new browse path format
-and overwrite the existing paths for entities that were extracted from these sources.
-
-### If you are producing custom Browse Paths
-
-If you've decided to produce your own custom Browse Paths to organize your assets (e.g. via the Python Emitter SDK), you'll want to change the code to produce those paths
-to truncate the final path component. For example, if you were previously emitting a browse path like this:
-
-```
-"my/custom/browse/path/suffix"
-```
-
-You can simply remove the final "suffix" piece:
-
-```
-"my/custom/browse/path"
-```
-
-Your users will be able to find the entity by traversing through these folders in the UI:
-
-`my` > `custom` > `browse`> `path` > `Click Entity`.
-
-
-> Note that if you are using the Browse Path Transformer you *will* be impacted in the same way. It is recommended that you revisit the
-> paths that you are producing, and update them to the new format.
-
-## Support
-
-The Acryl team will be on standby to assist you in your migration. Please
-join [#release-0_8_0](https://datahubspace.slack.com/archives/C0244FHMHJQ) channel and reach out to us if you find
-trouble with the upgrade or have feedback on the process. We will work closely to make sure you can continue to operate
-DataHub smoothly.
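
For the custom-path case described in this upgrade guide, a sketch of emitting the new, suffix-free format with the Python emitter; the URN, path, and GMS endpoint are placeholders, and the import paths assume the DataHub Python SDK of that era:

```python
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import BrowsePathsClass

emitter = DatahubRestEmitter("http://localhost:8080")   # placeholder GMS endpoint

mcp = MetadataChangeProposalWrapper(
    entityType="dataset",
    changeType="UPSERT",
    entityUrn="urn:li:dataset:(urn:li:dataPlatform:custom,MyFileName,PROD)",
    aspectName="browsePaths",
    aspect=BrowsePathsClass(paths=["/my/custom/browse/path"]),  # no trailing name component
)
emitter.emit_mcp(mcp)
```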
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/nl_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/nl_head.py
deleted file mode 100644
index 3eee424199e6aa363b564e2a3340a070db04db86..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/nl_head.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import torch
-from annotator.uniformer.mmcv.cnn import NonLocal2d
-
-from ..builder import HEADS
-from .fcn_head import FCNHead
-
-
-@HEADS.register_module()
-class NLHead(FCNHead):
- """Non-local Neural Networks.
-
-    This head is the implementation of `NLNet
-    <https://arxiv.org/abs/1711.07971>`_.
-
- Args:
- reduction (int): Reduction factor of projection transform. Default: 2.
- use_scale (bool): Whether to scale pairwise_weight by
- sqrt(1/inter_channels). Default: True.
- mode (str): The nonlocal mode. Options are 'embedded_gaussian',
- 'dot_product'. Default: 'embedded_gaussian.'.
- """
-
- def __init__(self,
- reduction=2,
- use_scale=True,
- mode='embedded_gaussian',
- **kwargs):
- super(NLHead, self).__init__(num_convs=2, **kwargs)
- self.reduction = reduction
- self.use_scale = use_scale
- self.mode = mode
- self.nl_block = NonLocal2d(
- in_channels=self.channels,
- reduction=self.reduction,
- use_scale=self.use_scale,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- mode=self.mode)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- output = self.convs[0](x)
- output = self.nl_block(output)
- output = self.convs[1](output)
- if self.concat_input:
- output = self.conv_cat(torch.cat([x, output], dim=1))
- output = self.cls_seg(output)
- return output
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/arraymisc/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/arraymisc/__init__.py
deleted file mode 100644
index 4b4700d6139ae3d604ff6e542468cce4200c020c..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/arraymisc/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .quantization import dequantize, quantize
-
-__all__ = ['quantize', 'dequantize']
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/platforms/osmesa.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/platforms/osmesa.py
deleted file mode 100644
index deaa5ff44031a107883913ae9a18fc425d650f3d..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/platforms/osmesa.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from .base import Platform
-
-
-__all__ = ['OSMesaPlatform']
-
-
-class OSMesaPlatform(Platform):
- """Renders into a software buffer using OSMesa. Requires special versions
- of OSMesa to be installed, plus PyOpenGL upgrade.
- """
-
- def __init__(self, viewport_width, viewport_height):
- super(OSMesaPlatform, self).__init__(viewport_width, viewport_height)
- self._context = None
- self._buffer = None
-
- def init_context(self):
- from OpenGL import arrays
- from OpenGL.osmesa import (
- OSMesaCreateContextAttribs, OSMESA_FORMAT,
- OSMESA_RGBA, OSMESA_PROFILE, OSMESA_CORE_PROFILE,
- OSMESA_CONTEXT_MAJOR_VERSION, OSMESA_CONTEXT_MINOR_VERSION,
- OSMESA_DEPTH_BITS
- )
-
- attrs = arrays.GLintArray.asArray([
- OSMESA_FORMAT, OSMESA_RGBA,
- OSMESA_DEPTH_BITS, 24,
- OSMESA_PROFILE, OSMESA_CORE_PROFILE,
- OSMESA_CONTEXT_MAJOR_VERSION, 3,
- OSMESA_CONTEXT_MINOR_VERSION, 3,
- 0
- ])
- self._context = OSMesaCreateContextAttribs(attrs, None)
- self._buffer = arrays.GLubyteArray.zeros(
- (self.viewport_height, self.viewport_width, 4)
- )
-
- def make_current(self):
- from OpenGL import GL as gl
- from OpenGL.osmesa import OSMesaMakeCurrent
- assert(OSMesaMakeCurrent(
- self._context, self._buffer, gl.GL_UNSIGNED_BYTE,
- self.viewport_width, self.viewport_height
- ))
-
- def make_uncurrent(self):
- """Make the OpenGL context uncurrent.
- """
- pass
-
- def delete_context(self):
- from OpenGL.osmesa import OSMesaDestroyContext
- OSMesaDestroyContext(self._context)
- self._context = None
- self._buffer = None
-
- def supports_framebuffers(self):
- return False
diff --git a/spaces/adorp/ControlNet-v1-1-duplicate/app_mlsd.py b/spaces/adorp/ControlNet-v1-1-duplicate/app_mlsd.py
deleted file mode 100644
index 073b0da202362716c6af5da7cb929981c78f7f20..0000000000000000000000000000000000000000
--- a/spaces/adorp/ControlNet-v1-1-duplicate/app_mlsd.py
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from utils import randomize_seed_fn
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- image = gr.Image()
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button('Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Number of images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- preprocess_resolution = gr.Slider(
- label='Preprocess resolution',
- minimum=128,
- maximum=512,
- value=512,
- step=1)
- mlsd_value_threshold = gr.Slider(
- label='Hough value threshold (MLSD)',
- minimum=0.01,
- maximum=2.0,
- value=0.1,
- step=0.01)
- mlsd_distance_threshold = gr.Slider(
- label='Hough distance threshold (MLSD)',
- minimum=0.01,
- maximum=20.0,
- value=0.1,
- step=0.01)
- num_steps = gr.Slider(label='Number of steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- randomize=True)
- randomize_seed = gr.Checkbox(label='Randomize seed',
- value=True)
- a_prompt = gr.Textbox(
- label='Additional prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output', show_label=False).style(
- columns=2, object_fit='scale-down')
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- preprocess_resolution,
- num_steps,
- guidance_scale,
- seed,
- mlsd_value_threshold,
- mlsd_distance_threshold,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name='mlsd',
- )
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model(task_name='MLSD')
- demo = create_demo(model.process_mlsd)
- demo.queue().launch()
diff --git a/spaces/akhaliq/JoJoGAN/e4e/criteria/id_loss.py b/spaces/akhaliq/JoJoGAN/e4e/criteria/id_loss.py
deleted file mode 100644
index bab806172eff18c0630536ae96817508c3197b8b..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/JoJoGAN/e4e/criteria/id_loss.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import torch
-from torch import nn
-from configs.paths_config import model_paths
-from models.encoders.model_irse import Backbone
-
-
-class IDLoss(nn.Module):
- def __init__(self):
- super(IDLoss, self).__init__()
- print('Loading ResNet ArcFace')
- self.facenet = Backbone(input_size=112, num_layers=50, drop_ratio=0.6, mode='ir_se')
- self.facenet.load_state_dict(torch.load(model_paths['ir_se50']))
- self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112))
- self.facenet.eval()
- for module in [self.facenet, self.face_pool]:
- for param in module.parameters():
- param.requires_grad = False
-
- def extract_feats(self, x):
- x = x[:, :, 35:223, 32:220] # Crop interesting region
- x = self.face_pool(x)
- x_feats = self.facenet(x)
- return x_feats
-
- def forward(self, y_hat, y, x):
- n_samples = x.shape[0]
- x_feats = self.extract_feats(x)
- y_feats = self.extract_feats(y) # Otherwise use the feature from there
- y_hat_feats = self.extract_feats(y_hat)
- y_feats = y_feats.detach()
- loss = 0
- sim_improvement = 0
- id_logs = []
- count = 0
- for i in range(n_samples):
- diff_target = y_hat_feats[i].dot(y_feats[i])
- diff_input = y_hat_feats[i].dot(x_feats[i])
- diff_views = y_feats[i].dot(x_feats[i])
- id_logs.append({'diff_target': float(diff_target),
- 'diff_input': float(diff_input),
- 'diff_views': float(diff_views)})
- loss += 1 - diff_target
- id_diff = float(diff_target) - float(diff_views)
- sim_improvement += id_diff
- count += 1
-
- return loss / count, sim_improvement / count, id_logs
diff --git a/spaces/akhaliq/T0pp/app.py b/spaces/akhaliq/T0pp/app.py
deleted file mode 100644
index 79cf4a8ca8020487a034253c2736ae102249355e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/T0pp/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-title = "T0pp"
-description = "Gradio Demo for T0pp, T0* is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. Can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. Read more at the links below."
-article = ""
-examples = [
- ['Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy'],["It's rainy today but it will stop in a few hours, when should I go for my run?"],["How many hydrogen atoms are in a water molecule?"]
-]
-gr.Interface.load("huggingface/bigscience/T0pp", inputs=gr.inputs.Textbox(lines=5, label="Input Text"),title=title,description=description,article=article, examples=examples,enable_queue=True).launch()
\ No newline at end of file
diff --git a/spaces/akhaliq/Text-to-Music/README.md b/spaces/akhaliq/Text-to-Music/README.md
deleted file mode 100644
index a4e4d994277b0ddf86f6bf76c9149a2632024d8b..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Text-to-Music/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Text To Music
-emoji: ⚡
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: unknown
-duplicated_from: Mubert/Text-to-Music
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/deeplab2/g3doc/setup/coco.md b/spaces/akhaliq/deeplab2/g3doc/setup/coco.md
deleted file mode 100644
index 0d6884493ae5001188f94ab0747bf20c8622ee08..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/g3doc/setup/coco.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Run DeepLab2 on COCO dataset
-
-This page walks through the steps required to generate
-[COCO](https://cocodataset.org/) panoptic segmentation data for DeepLab2.
-DeepLab2 uses sharded TFRecords for efficient processing of the data.
-
-## Prework
-
-Before running any Deeplab2 scripts, the users should (1) access the
-[COCO dataset website](https://cocodataset.org/) to download the dataset,
-including [2017 Train images](http://images.cocodataset.org/zips/train2017.zip),
-[2017 Val images](http://images.cocodataset.org/zips/val2017.zip),
-[2017 Test images](http://images.cocodataset.org/zips/test2017.zip), and
-[2017 Panoptic Train/Val annotations](http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip),
-and (2) unzip the downloaded files.
-
-After finishing above steps, the expected directory structure should be as
-follows:
-
-```
-.(COCO_ROOT)
-+-- train2017
-| |
-| +-- *.jpg
-|
-|-- val2017
-| |
-| +-- *.jpg
-|
-|-- test2017
-| |
-| +-- *.jpg
-|
-+-- annotations
- |
- +-- panoptic_{train|val}2017.json
- +-- panoptic_{train|val}2017
-```
-
-## Convert prepared dataset to TFRecord
-
-Use the following commandline to generate COCO TFRecords:
-
-```bash
-# For generating data for panoptic segmentation task
-python deeplab2/data/build_coco_data.py \
- --coco_root=${COCO_ROOT} \
- --output_dir=${OUTPUT_DIR}
-```
-
-Commandline above will output three sharded tfrecord files:
-`{train|val|test}@1000.tfrecord`. In the tfrecords, for `train` and `val` set,
-it contains the RGB image pixels as well as corresponding annotations. For
-`test` set, it contains RGB images only. These files will be used as the input
-for the model training and evaluation.
-
-Note that we map the class IDs to contiguous IDs. Specifically, we map the
-original label ID, which ranges from 1 to 200, to the contiguous ones ranging
-from 1 to 133.
-
-### TFExample proto format for COCO
-
-The Example proto contains the following fields:
-
-* `image/encoded`: encoded image content.
-* `image/filename`: image filename.
-* `image/format`: image file format.
-* `image/height`: image height.
-* `image/width`: image width.
-* `image/channels`: image channels.
-* `image/segmentation/class/encoded`: encoded segmentation content.
-* `image/segmentation/class/format`: segmentation encoding format.
-
-For panoptic segmentation, the encoded segmentation map will be the raw bytes of
-an int32 panoptic map, where each pixel is assigned to a panoptic ID, which is
-computed by:
-
-```
- panoptic ID = semantic ID * label divisor + instance ID
-```
-
-where semantic ID will be:
-
-* ignore label (0) for pixels not belonging to any segment
-* for segments associated with `iscrowd` label:
- * (default): ignore label (0)
- * (if set `--treat_crowd_as_ignore=false` while running
- `build_coco_data.py`): `category_id`
-* `category_id` for other segments
-
-The instance ID will be 0 for pixels belonging to
-
-* `stuff` class
-* `thing` class with `iscrowd` label
-* pixels with ignore label
-
-and `[1, label divisor)` otherwise.
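
Decoding the panoptic map described above is plain integer arithmetic. A sketch, with the label divisor left as a placeholder (take the actual value from the DeepLab2 dataset config):

```python
import numpy as np

LABEL_DIVISOR = 256   # placeholder; use the dataset's configured panoptic label divisor

panoptic_map = np.array([[0, 2 * LABEL_DIVISOR + 1],
                         [7 * LABEL_DIVISOR, 7 * LABEL_DIVISOR + 3]], dtype=np.int32)

semantic_map = panoptic_map // LABEL_DIVISOR   # 0 = ignore label, otherwise category_id
instance_map = panoptic_map % LABEL_DIVISOR    # 0 for stuff / crowd / ignore pixels
```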
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py
deleted file mode 100644
index bdaf4033c9364f3513f0d6bade7892fd6ae35128..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import logging
-from typing import Iterable, Set, Tuple
-
-from pip._internal.build_env import BuildEnvironment
-from pip._internal.distributions.base import AbstractDistribution
-from pip._internal.exceptions import InstallationError
-from pip._internal.index.package_finder import PackageFinder
-from pip._internal.metadata import BaseDistribution
-from pip._internal.utils.subprocess import runner_with_spinner_message
-
-logger = logging.getLogger(__name__)
-
-
-class SourceDistribution(AbstractDistribution):
- """Represents a source distribution.
-
- The preparation step for these needs metadata for the packages to be
- generated, either using PEP 517 or using the legacy `setup.py egg_info`.
- """
-
- def get_metadata_distribution(self) -> BaseDistribution:
- return self.req.get_dist()
-
- def prepare_distribution_metadata(
- self, finder: PackageFinder, build_isolation: bool
- ) -> None:
- # Load pyproject.toml, to determine whether PEP 517 is to be used
- self.req.load_pyproject_toml()
-
- # Set up the build isolation, if this requirement should be isolated
- should_isolate = self.req.use_pep517 and build_isolation
- if should_isolate:
- # Setup an isolated environment and install the build backend static
- # requirements in it.
- self._prepare_build_backend(finder)
- # Check that if the requirement is editable, it either supports PEP 660 or
- # has a setup.py or a setup.cfg. This cannot be done earlier because we need
- # to setup the build backend to verify it supports build_editable, nor can
- # it be done later, because we want to avoid installing build requirements
- # needlessly. Doing it here also works around setuptools generating
- # UNKNOWN.egg-info when running get_requires_for_build_wheel on a directory
- # without setup.py nor setup.cfg.
- self.req.isolated_editable_sanity_check()
- # Install the dynamic build requirements.
- self._install_build_reqs(finder)
-
- self.req.prepare_metadata()
-
- def _prepare_build_backend(self, finder: PackageFinder) -> None:
- # Isolate in a BuildEnvironment and install the build-time
- # requirements.
- pyproject_requires = self.req.pyproject_requires
- assert pyproject_requires is not None
-
- self.req.build_env = BuildEnvironment()
- self.req.build_env.install_requirements(
- finder, pyproject_requires, "overlay", kind="build dependencies"
- )
- conflicting, missing = self.req.build_env.check_requirements(
- self.req.requirements_to_check
- )
- if conflicting:
- self._raise_conflicts("PEP 517/518 supported requirements", conflicting)
- if missing:
- logger.warning(
- "Missing build requirements in pyproject.toml for %s.",
- self.req,
- )
- logger.warning(
- "The project does not specify a build backend, and "
- "pip cannot fall back to setuptools without %s.",
- " and ".join(map(repr, sorted(missing))),
- )
-
- def _get_build_requires_wheel(self) -> Iterable[str]:
- with self.req.build_env:
- runner = runner_with_spinner_message("Getting requirements to build wheel")
- backend = self.req.pep517_backend
- assert backend is not None
- with backend.subprocess_runner(runner):
- return backend.get_requires_for_build_wheel()
-
- def _get_build_requires_editable(self) -> Iterable[str]:
- with self.req.build_env:
- runner = runner_with_spinner_message(
- "Getting requirements to build editable"
- )
- backend = self.req.pep517_backend
- assert backend is not None
- with backend.subprocess_runner(runner):
- return backend.get_requires_for_build_editable()
-
- def _install_build_reqs(self, finder: PackageFinder) -> None:
- # Install any extra build dependencies that the backend requests.
- # This must be done in a second pass, as the pyproject.toml
- # dependencies must be installed before we can call the backend.
- if (
- self.req.editable
- and self.req.permit_editable_wheels
- and self.req.supports_pyproject_editable()
- ):
- build_reqs = self._get_build_requires_editable()
- else:
- build_reqs = self._get_build_requires_wheel()
- conflicting, missing = self.req.build_env.check_requirements(build_reqs)
- if conflicting:
- self._raise_conflicts("the backend dependencies", conflicting)
- self.req.build_env.install_requirements(
- finder, missing, "normal", kind="backend dependencies"
- )
-
- def _raise_conflicts(
- self, conflicting_with: str, conflicting_reqs: Set[Tuple[str, str]]
- ) -> None:
- format_string = (
- "Some build dependencies for {requirement} "
- "conflict with {conflicting_with}: {description}."
- )
- error_message = format_string.format(
- requirement=self.req,
- conflicting_with=conflicting_with,
- description=", ".join(
- f"{installed} is incompatible with {wanted}"
- for installed, wanted in sorted(conflicting_reqs)
- ),
- )
- raise InstallationError(error_message)
diff --git a/spaces/ali-ghamdan/deoldify/fastai/imports/torch.py b/spaces/ali-ghamdan/deoldify/fastai/imports/torch.py
deleted file mode 100644
index 028a3932fb7c12356a0ab239098cd8088308d37f..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/deoldify/fastai/imports/torch.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import torch, torch.nn.functional as F
-from torch import ByteTensor, DoubleTensor, FloatTensor, HalfTensor, LongTensor, ShortTensor, Tensor
-from torch import nn, optim, as_tensor
-from torch.utils.data import BatchSampler, DataLoader, Dataset, Sampler, TensorDataset
-from torch.nn.utils import weight_norm, spectral_norm
diff --git a/spaces/aliabid94/AutoGPT/README.md b/spaces/aliabid94/AutoGPT/README.md
deleted file mode 100644
index b3d2b2ddb23ae71650a9570465d445321b6d5559..0000000000000000000000000000000000000000
--- a/spaces/aliabid94/AutoGPT/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AutoGPT
-emoji: 🦾
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: ui/app.py
-pinned: false
-license: mit
----
-
diff --git a/spaces/aliabid94/AutoGPT/autogpt/spinner.py b/spaces/aliabid94/AutoGPT/autogpt/spinner.py
deleted file mode 100644
index 4e33d74213881352546f334ccb1eb4772b8b7b70..0000000000000000000000000000000000000000
--- a/spaces/aliabid94/AutoGPT/autogpt/spinner.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""A simple spinner module"""
-import itertools
-import sys
-import threading
-import time
-
-
-class Spinner:
- """A simple spinner class"""
-
- def __init__(self, message: str = "Loading...", delay: float = 0.1) -> None:
- """Initialize the spinner class
-
- Args:
- message (str): The message to display.
- delay (float): The delay between each spinner update.
- """
- self.spinner = itertools.cycle(["-", "/", "|", "\\"])
- self.delay = delay
- self.message = message
- self.running = False
- self.spinner_thread = None
-
- def spin(self) -> None:
- """Spin the spinner"""
- while self.running:
- sys.stdout.write(f"{next(self.spinner)} {self.message}\r")
- sys.stdout.flush()
- time.sleep(self.delay)
- sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r")
-
- def __enter__(self):
- """Start the spinner"""
- self.running = True
- self.spinner_thread = threading.Thread(target=self.spin)
- self.spinner_thread.start()
-
- return self
-
- def __exit__(self, exc_type, exc_value, exc_traceback) -> None:
- """Stop the spinner
-
- Args:
- exc_type (Exception): The exception type.
- exc_value (Exception): The exception value.
- exc_traceback (Exception): The exception traceback.
- """
- self.running = False
- if self.spinner_thread is not None:
- self.spinner_thread.join()
- sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r")
- sys.stdout.flush()
-
- def update_message(self, new_message, delay=0.1):
- """Update the spinner message
- Args:
- new_message (str): New message to display
- delay: Delay in seconds before updating the message
- """
- time.sleep(delay)
- sys.stdout.write(
- f"\r{' ' * (len(self.message) + 2)}\r"
- ) # Clear the current message
- sys.stdout.flush()
- self.message = new_message
diff --git a/spaces/arslan-ahmed/talk-to-your-docs/Dockerfile b/spaces/arslan-ahmed/talk-to-your-docs/Dockerfile
deleted file mode 100644
index 2ae720c2805ac5095c21ce52e71257f3cdd284d4..0000000000000000000000000000000000000000
--- a/spaces/arslan-ahmed/talk-to-your-docs/Dockerfile
+++ /dev/null
@@ -1,19 +0,0 @@
-
-# Use an official Python runtime as a parent image
-FROM arslan2k12/ttyd_base
-
-# Set the working directory in the container
-WORKDIR /app/ttyd
-
-# Copy the application's Python files into the container at /app/ttyd
-# COPY . /app/ttyd
-COPY *.py /app/ttyd
-
-# Make the Gradio app accessible on the local network (the default 127.0.0.1 is only reachable from inside the container)
-ENV GRADIO_SERVER_NAME=0.0.0.0
-
-# Install any needed packages specified in requirements.txt
-# RUN pip install --no-cache-dir -r requirements.txt # already installed in base image
-
-# Use ENTRYPOINT to allow passing user arguments
-ENTRYPOINT ["python", "app.py"]
\ No newline at end of file
diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/neuralhmm_tts/train_neuralhmmtts.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/neuralhmm_tts/train_neuralhmmtts.py
deleted file mode 100644
index 28d37799750b7115be9a24c4a947526fed9429fe..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/neuralhmm_tts/train_neuralhmmtts.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-
-from trainer import Trainer, TrainerArgs
-
-from TTS.config.shared_configs import BaseAudioConfig
-from TTS.tts.configs.neuralhmm_tts_config import NeuralhmmTTSConfig
-from TTS.tts.configs.shared_configs import BaseDatasetConfig
-from TTS.tts.datasets import load_tts_samples
-from TTS.tts.models.neuralhmm_tts import NeuralhmmTTS
-from TTS.tts.utils.text.tokenizer import TTSTokenizer
-from TTS.utils.audio import AudioProcessor
-
-output_path = os.path.dirname(os.path.abspath(__file__))
-
-# init configs
-dataset_config = BaseDatasetConfig(
- formatter="ljspeech", meta_file_train="metadata.csv", path=os.path.join("data", "LJSpeech-1.1/")
-)
-
-audio_config = BaseAudioConfig(
- sample_rate=22050,
- do_trim_silence=True,
- trim_db=60.0,
- signal_norm=False,
- mel_fmin=0.0,
- mel_fmax=8000,
- spec_gain=1.0,
- log_func="np.log",
- ref_level_db=20,
- preemphasis=0.0,
-)
-
-config = NeuralhmmTTSConfig(  # This is the config that is saved for future use
- run_name="neuralhmmtts_ljspeech",
- audio=audio_config,
- batch_size=32,
- eval_batch_size=16,
- num_loader_workers=4,
- num_eval_loader_workers=4,
- run_eval=True,
- test_delay_epochs=-1,
- epochs=1000,
- text_cleaner="phoneme_cleaners",
- use_phonemes=True,
- phoneme_language="en-us",
- phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
- precompute_num_workers=8,
- mel_statistics_parameter_path=os.path.join(output_path, "lj_parameters.pt"),
- force_generate_statistics=False,
- print_step=1,
- print_eval=True,
- mixed_precision=True,
- output_path=output_path,
- datasets=[dataset_config],
-)
-
-# INITIALIZE THE AUDIO PROCESSOR
-# Audio processor is used for feature extraction and audio I/O.
-# It mainly serves to the dataloader and the training loggers.
-ap = AudioProcessor.init_from_config(config)
-
-# INITIALIZE THE TOKENIZER
-# Tokenizer is used to convert text to sequences of token IDs.
-# If characters are not defined in the config, default characters are passed to the config
-tokenizer, config = TTSTokenizer.init_from_config(config)
-
-# LOAD DATA SAMPLES
-# Each sample is a list of ```[text, audio_file_path, speaker_name]```
-# You can define your custom sample loader returning the list of samples.
-# Or define your custom formatter and pass it to the `load_tts_samples`.
-# Check `TTS.tts.datasets.load_tts_samples` for more details.
-train_samples, eval_samples = load_tts_samples(
- dataset_config,
- eval_split=True,
- eval_split_max_size=config.eval_split_max_size,
- eval_split_size=config.eval_split_size,
-)
-
-# INITIALIZE THE MODEL
-# Models take a config object and a speaker manager as input
-# Config defines the details of the model like the number of layers, the size of the embedding, etc.
-# Speaker manager is used by multi-speaker models.
-model = NeuralhmmTTS(config, ap, tokenizer)
-
-
-# init the trainer and 🚀
-trainer = Trainer(
- TrainerArgs(),
- config,
- output_path,
- model=model,
- train_samples=train_samples,
- eval_samples=eval_samples,
- gpu=1,
-)
-trainer.fit()
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/MD5.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/MD5.py
deleted file mode 100644
index 554b77720fa10ab12ec18d4657b9b9c087676d48..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/MD5.py
+++ /dev/null
@@ -1,184 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-from Crypto.Util.py3compat import *
-
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer,
- create_string_buffer,
- get_raw_buffer, c_size_t,
- c_uint8_ptr)
-
-_raw_md5_lib = load_pycryptodome_raw_lib("Crypto.Hash._MD5",
- """
- #define MD5_DIGEST_SIZE 16
-
- int MD5_init(void **shaState);
- int MD5_destroy(void *shaState);
- int MD5_update(void *hs,
- const uint8_t *buf,
- size_t len);
- int MD5_digest(const void *shaState,
- uint8_t digest[MD5_DIGEST_SIZE]);
- int MD5_copy(const void *src, void *dst);
-
- int MD5_pbkdf2_hmac_assist(const void *inner,
- const void *outer,
- const uint8_t first_digest[MD5_DIGEST_SIZE],
- uint8_t final_digest[MD5_DIGEST_SIZE],
- size_t iterations);
- """)
-
-class MD5Hash(object):
- """A MD5 hash object.
- Do not instantiate directly.
- Use the :func:`new` function.
-
- :ivar oid: ASN.1 Object ID
- :vartype oid: string
-
- :ivar block_size: the size in bytes of the internal message block,
- input to the compression function
- :vartype block_size: integer
-
- :ivar digest_size: the size in bytes of the resulting hash
- :vartype digest_size: integer
- """
-
- # The size of the resulting hash in bytes.
- digest_size = 16
- # The internal block size of the hash algorithm in bytes.
- block_size = 64
- # ASN.1 Object ID
- oid = "1.2.840.113549.2.5"
-
- def __init__(self, data=None):
- state = VoidPointer()
- result = _raw_md5_lib.MD5_init(state.address_of())
- if result:
- raise ValueError("Error %d while instantiating MD5"
- % result)
- self._state = SmartPointer(state.get(),
- _raw_md5_lib.MD5_destroy)
- if data:
- self.update(data)
-
- def update(self, data):
- """Continue hashing of a message by consuming the next chunk of data.
-
- Args:
- data (byte string/byte array/memoryview): The next chunk of the message being hashed.
- """
-
- result = _raw_md5_lib.MD5_update(self._state.get(),
- c_uint8_ptr(data),
- c_size_t(len(data)))
- if result:
- raise ValueError("Error %d while instantiating MD5"
- % result)
-
- def digest(self):
- """Return the **binary** (non-printable) digest of the message that has been hashed so far.
-
- :return: The hash digest, computed over the data processed so far.
- Binary form.
- :rtype: byte string
- """
-
- bfr = create_string_buffer(self.digest_size)
- result = _raw_md5_lib.MD5_digest(self._state.get(),
- bfr)
- if result:
- raise ValueError("Error %d while instantiating MD5"
- % result)
-
- return get_raw_buffer(bfr)
-
- def hexdigest(self):
- """Return the **printable** digest of the message that has been hashed so far.
-
- :return: The hash digest, computed over the data processed so far.
- Hexadecimal encoded.
- :rtype: string
- """
-
- return "".join(["%02x" % bord(x) for x in self.digest()])
-
- def copy(self):
- """Return a copy ("clone") of the hash object.
-
- The copy will have the same internal state as the original hash
- object.
- This can be used to efficiently compute the digests of strings that
- share a common initial substring.
-
- :return: A hash object of the same type
- """
-
- clone = MD5Hash()
- result = _raw_md5_lib.MD5_copy(self._state.get(),
- clone._state.get())
- if result:
- raise ValueError("Error %d while copying MD5" % result)
- return clone
-
- def new(self, data=None):
- """Create a fresh SHA-1 hash object."""
-
- return MD5Hash(data)
-
-
-def new(data=None):
- """Create a new hash object.
-
- :parameter data:
- Optional. The very first chunk of the message to hash.
- It is equivalent to an early call to :meth:`MD5Hash.update`.
- :type data: byte string/byte array/memoryview
-
- :Return: A :class:`MD5Hash` hash object
- """
- return MD5Hash().new(data)
-
-# The size of the resulting hash in bytes.
-digest_size = 16
-
-# The internal block size of the hash algorithm in bytes.
-block_size = 64
-
-
-def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations):
- """Compute the expensive inner loop in PBKDF-HMAC."""
-
- assert len(first_digest) == digest_size
- assert iterations > 0
-
-    bfr = create_string_buffer(digest_size)
- result = _raw_md5_lib.MD5_pbkdf2_hmac_assist(
- inner._state.get(),
- outer._state.get(),
- first_digest,
- bfr,
- c_size_t(iterations))
-
- if result:
- raise ValueError("Error %d with PBKDF2-HMAC assis for MD5" % result)
-
- return get_raw_buffer(bfr)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/plasma_utils.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/plasma_utils.py
deleted file mode 100644
index 459fb8acd789e7b03c70201cb5cb2a9e7dc4f325..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/plasma_utils.py
+++ /dev/null
@@ -1,197 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import hashlib
-import json
-import subprocess
-import tempfile
-from typing import Hashable
-
-try:
- import pyarrow.plasma as plasma
-
- PYARROW_AVAILABLE = True
-except ImportError:
- plasma = None
- PYARROW_AVAILABLE = False
-
-
-class PlasmaArray:
- """
- Wrapper around numpy arrays that automatically moves the data to shared
- memory upon serialization. This is particularly helpful when passing numpy
- arrays through multiprocessing, so that data is not unnecessarily
- duplicated or pickled.
- """
-
- def __init__(self, array):
- super().__init__()
- self.array = array
- self.disable = array.nbytes < 134217728 # disable for arrays <128MB
- self.object_id = None
- self.path = None
-
- # variables with underscores shouldn't be pickled
- self._client = None
- self._server = None
- self._server_tmp = None
- self._plasma = None
-
- @property
- def plasma(self):
- if self._plasma is None and not self.disable:
- self._plasma = plasma
- return self._plasma
-
- def start_server(self):
- if self.plasma is None or self._server is not None:
- return
- assert self.object_id is None
- assert self.path is None
- self._server_tmp = tempfile.NamedTemporaryFile()
- self.path = self._server_tmp.name
- self._server = subprocess.Popen(
- ["plasma_store", "-m", str(int(1.05 * self.array.nbytes)), "-s", self.path]
- )
-
- @property
- def client(self):
- if self._client is None:
- assert self.path is not None
- self._client = self.plasma.connect(self.path, num_retries=200)
- return self._client
-
- def __getstate__(self):
- """Called on pickle load"""
- if self.plasma is None:
- return self.__dict__
- if self.object_id is None:
- self.start_server()
- self.object_id = self.client.put(self.array)
- state = self.__dict__.copy()
- del state["array"]
- state["_client"] = None
- state["_server"] = None
- state["_server_tmp"] = None
- state["_plasma"] = None
- return state
-
- def __setstate__(self, state):
- """Called on pickle save"""
- self.__dict__.update(state)
- if self.plasma is None:
- return
- self.array = self.client.get(self.object_id)
-
- def __del__(self):
- if self._server is not None:
- self._server.kill()
- self._server = None
- self._server_tmp.close()
- self._server_tmp = None
-
-
-DEFAULT_PLASMA_PATH = "/tmp/plasma"
-
-
-class PlasmaView:
- """Interface to write and read from shared memory. Whereas PlasmaArray writes to plasma on serialization,
- PlasmaView writes to shared memory on instantiation."""
-
- def __init__(self, array, split_path: str, hash_data: Hashable, plasma_path=None):
- """
- Args:
- array: numpy array to store. This can be read with ``PlasmaView().array``
- split_path: the path whence the data was read, used for hashing
- hash_data: other metadata about the array that can be used to create a unique key.
- as of writing, the 3 callers in ``TokenBlockDataset`` use::
-
- hash_data = ((block_size, document_sep_len, str(break_mode), len(dataset)), 0|1|2)
-
-
- """
- assert PYARROW_AVAILABLE
- assert split_path is not None
- if plasma_path is None:
- plasma_path = DEFAULT_PLASMA_PATH
-
- self.path = plasma_path
- self.split_path = split_path
- self._client = None # Initialize lazily for pickle. plasma clients should not be deep copied or serialized.
- self._n = None
-
- self.object_id = self.get_object_id(self.split_path, hash_data)
- try:
- self.client.put(array, object_id=self.object_id)
- except plasma.PlasmaObjectExists:
- pass
-
- @property
- def client(self):
- if self._client is None:
- self._client = plasma.connect(self.path, num_retries=200)
- return self._client
-
- @property
- def array(self):
- """Fetch a read only view of an np.array, stored in plasma."""
- ret = self.client.get(self.object_id)
- return ret
-
- @staticmethod
- def get_object_id(split_path: str, hash_data: Hashable):
- """Returns plasma.ObjectID from hashing split_path and object_num."""
- hash = hashlib.blake2b(bytes(split_path, "utf-8"), digest_size=20)
- harg = json.dumps(hash_data).encode("utf-8")
- hash.update(harg)
- return plasma.ObjectID(hash.digest())
-
- def __getstate__(self):
- """Called on pickle save"""
- self.disconnect()
- state = self.__dict__.copy()
- assert state["_client"] is None
- assert "object_id" in state
- return state
-
- def __setstate__(self, state):
- """Called on pickle load"""
- self.__dict__.update(state)
-
- def __del__(self):
- self.disconnect()
-
- def disconnect(self):
- if self._client is not None:
- self._client.disconnect()
- self._client = None
-
- def __len__(self):
- """Save reads by caching len"""
- if self._n is None:
- self._n = len(self.array)
- return self._n
-
-
-GB100 = (1024**3) * 100
-
-
-class PlasmaStore:
- def __init__(self, path=DEFAULT_PLASMA_PATH, nbytes: int = GB100):
-
- self.server = self.start(path, nbytes)
-
- def __del__(self):
- self.server.kill()
-
- @staticmethod
- def start(path=DEFAULT_PLASMA_PATH, nbytes: int = GB100) -> subprocess.Popen:
- if not PYARROW_AVAILABLE:
- raise ImportError("please run pip install pyarrow to use --use_plasma_view")
- # best practice is to allocate more space than we need. The limitation seems to be the size of /dev/shm
- _server = subprocess.Popen(["plasma_store", "-m", str(nbytes), "-s", path])
- plasma.connect(path, num_retries=200) # If we can't connect we fail immediately
- return _server
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/tasks/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/tasks/__init__.py
deleted file mode 100644
index e2e9323a530672ef9daecd793ef645a3c1d0f3e6..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/tasks/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-from .task import *
-from .vlmtask import *
-from .retritask import *
-
-try:
- from .fairseqmmtask import *
-except ImportError:
- pass
-
-try:
- from .milncetask import *
-except ImportError:
- pass
-
-try:
- from .expretritask import *
-except ImportError:
- pass
diff --git a/spaces/awacke1/DockerGoFlanT5/static/index.html b/spaces/awacke1/DockerGoFlanT5/static/index.html
deleted file mode 100644
index 876c6fb45bb9cfc4f4d2638651783d084c7bbdb0..0000000000000000000000000000000000000000
--- a/spaces/awacke1/DockerGoFlanT5/static/index.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-
-
-
- Docker Space Fast API Uvicorn
-
-
-
-
-
-
-
And speaking of which, with this awesome mobile application of ESET Mobile Security & Antivirus, Android users can now enjoy working freely with their Android devices without worrying about their security. With the app constantly working in the background to get rid of all security threats and prevent potential exploitations, you can have complete peace of mind.
-
Scan to make sure that no one is using your network and improve your Wi-Fi network security. Protect your mobile devices from adware and malware from unsecured apps. Schedule system checks and enable background app instances to always protect your devices. The list goes on.
For those of you who are interested, you can now enjoy working with this amazing mobile application of ESET Mobile Security & Antivirus and enjoy all of its amazing features for free, thanks to the featured app on the Google Play Store. However, the freemium app will still come with certain in-app purchases, which are required if you want to unlock its complete features.
-
With ESET Mobile Security & Antivirus, Android users can immediately enjoy the intuitive and accessible Android app on their mobile devices, thanks to the improved dashboard UX and friendly interfaces which make it super easy for mobile users to interact with. Here, Android users are free to navigate through the app and make use of its undemanding features.
-
For those of you who are interested, you can now enable your real-time scanning of ESET Mobile Security & Antivirus by providing the app with certain access permissions. This will allow the mobile security tool to consistently enable its antivirus scans to protect your mobile devices on the go. Here, the active scans will make sure that your devices are always free of virus in real time.
-
And to make the app more fun and interesting, ESET Mobile Security & Antivirus users can now choose to scan and check for the connected USB devices to ensure their safety. With the On-The-Go USB Scanner, you can make sure that your USB devices are safe by testing them on your mobile devices first before plugging them on other systems.
-
With online transactions becoming more and more popular, mobile users are finding their mobile devices being more and more useful when it comes to executing these transactions. However, this also allows cyber attackers to exploit your online payment methods to steal your money. With Payment Protection from ESET Mobile Security & Antivirus, each of your transactions will be properly verified by the app to make sure that you can shop and bank safely while online.
-
-
With the Anti-Phishing option available, ESET Mobile Security & Antivirus users can now identify scam sites while browsing the web pages with their mobile devices. Here, the app will actively scan your online activities and notify you when accessing certain unsecured web pages. Plus, it will also improve your security when using social apps, which will prevent most hackers from collecting any of your information.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Hard Disk Sentinel Pro 5.20 Build 9372 Portable Free Download UPDATED.md b/spaces/bioriAsaeru/text-to-voice/Hard Disk Sentinel Pro 5.20 Build 9372 Portable Free Download UPDATED.md
deleted file mode 100644
index 9eb257a39ea801e69871281fc8476e28eac6d9dc..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Hard Disk Sentinel Pro 5.20 Build 9372 Portable Free Download UPDATED.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
Hard Disk Sentinel Pro 5.20 Build 9372 Portable free download
-
-TНИЦДАЛЬНОЕ ПРЕДЫРЕНЕНИЕ.
-
-Hard Disk Sentinel 3.1.0.0 Crack 2020 Free Download is a Disk utility, by the software developer Techuva that can recognize, verify and locate disk errors and other disk related problems. Hard Disk Sentinel is a software program that uses self-learning AI (Artificial Intelligence).
-
-It scans for bad sectors, bad cylinders, physical defects and hard disk drive problems. It also checks for problems when hard disks are shut down. It is easy to use disk error scanning software, there are two languages: English, French and Spanish. It is a small portable program, which can be installed on an USB flash drive. It can be used on an entire hard disk or even a single partition.
-
-Download Hard Disk Sentinel, Professional, Trial, DOS, Linux versions.
-
-Features:
-
-New Scan Modes:
-
-Hard Disk Sentinel 2020 Crack uses a new scan mode with the new function of 32 bit/64 bit. It can analyze the errors that appear in the 32-bit and 64-bit versions of Windows, both 32-bit and 64-bit Windows.
-
-Faster Scan Speed:
-
-The disk file scanning software has a faster scan speed, making it easier for the software to recognize the errors.
-
-Automatic Scanning:
-
-The Hard Disk Sentinel 2020 Crack can automatically scan for any disk problems, making it easier for the user.
-
-Windows x64 System Requirements:
-
-Hard Disk Sentinel needs a processor speed of 2.66 GHz or above
-
-Windows 10, 8.1, 8 or 7 or 6 SP1
-
-1.7 GB RAM or more
-
-Hard Disk Sentinel 2.0.0.0 Free Keygen Is now here and you can free download it.
-
-Hard Disk Sentinel 2.0.0.0 Crack is a disk utility that can recognize, verify and locate disk errors and other disk related problems. Hard Disk Sentinel is a software program that uses self-learning AI (Artificial Intelligence). It is a small portable program, which can be installed on an USB flash drive. It is easy to use disk error scanning software, there are two languages: English, French and Spanish. It is a disk file scanning software that can automatically scan for any disk problems. 4fefd39f24
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Klinicka Farmakologija Knjiga Pdf Download PATCHED.md b/spaces/bioriAsaeru/text-to-voice/Klinicka Farmakologija Knjiga Pdf Download PATCHED.md
deleted file mode 100644
index 3e675f50df01567be127ba129d99b6a5fb5a225e..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Klinicka Farmakologija Knjiga Pdf Download PATCHED.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
Klinicka Farmakologija Knjiga Pdf Download: A Guide to Finding and Reading Pharmacology Books in Serbian
-
Klinicka farmakologija knjiga pdf download is a phrase that means "clinical pharmacology book pdf download" in Serbian. Clinical pharmacology is the branch of medicine that studies the effects, interactions, and uses of drugs in humans. If you are interested in learning more about this topic, you may want to find and read some pharmacology books in Serbian.
However, finding and reading pharmacology books in Serbian may not be easy for everyone. You may face some challenges such as:
-
-
Limited availability of pharmacology books in Serbian online or offline
-
High cost of buying or accessing pharmacology books in Serbian
-
Difficulty in understanding the technical terms and concepts in pharmacology books in Serbian
-
Lack of reliable sources or references for pharmacology books in Serbian
-
-
To help you overcome these challenges, we have prepared this guide to finding and reading pharmacology books in Serbian. We will provide you with some tips and resources that will make your search and reading easier and more enjoyable.
-
How to Find Pharmacology Books in Serbian Online
-
One of the easiest ways to find pharmacology books in Serbian online is to use a search engine such as Google or Bing. You can type in the keyword "klinicka farmakologija knjiga pdf download" or other related terms such as "farmakologija knjiga pdf", "farmakologija knjiga online", or "farmakologija knjiga besplatno". You can also add the name of a specific author or book title if you have one in mind.
-
However, not all the results that you will get from the search engine will be relevant or reliable. You may encounter some problems such as:
-
-
Broken links or expired downloads
-
Low-quality or incomplete pdf files
-
Irrelevant or outdated content
-
Malware or viruses attached to the files
-
Illegal or unethical distribution of copyrighted material
-
-
To avoid these problems, you should be careful and selective when choosing which websites to visit and which files to download. You should look for some indicators of credibility and quality such as:
-
-
The domain name and extension of the website (e.g., .edu, .org, .gov)
-
The design and layout of the website (e.g., professional, user-friendly, updated)
-
The authorship and affiliation of the content (e.g., name, credentials, institution)
-
The date and source of publication of the content (e.g., year, publisher, journal)
-
The reviews and ratings of the content (e.g., comments, feedback, stars)
-
The availability and accessibility of the content (e.g., free, open access, registration required)
-
-
Some examples of websites that offer pharmacology books in Serbian online are:
-
-- [^1^] Scribd: A digital library that hosts millions of books, documents, audiobooks, podcasts, and magazines. You can find some pharmacology books in Serbian on Scribd such as Farmakologija - Rang[^1^], a free ebook that covers various topics in pharmacology such as drug receptors, drug metabolism, drug toxicity, drug therapy, and drug development. You can read it online or download it as a pdf file after signing up for a free trial or a subscription.
-- [^2^] Data Status: A publishing house that specializes in medical and scientific books. You can find some pharmacology books in Serbian on Data Status such as Temeljna i klinička farmakologija[^2^], a comprehensive textbook that covers the basic principles and clinical applications of pharmacology. You can buy it online as a hardcover book or an ebook.
-- [^3^] Sway: A Microsoft Office app that allows you to create and share interactive presentations. You can find some pharmacology books in Serbian on Sway such as Klinicka Farmakologija Knjiga Pdf Download[^3^], a presentation that provides a brief d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/kernel_inception_distance.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/kernel_inception_distance.py
deleted file mode 100644
index 3ac978925b5cf810463ef8e8a6f0dcd3f9078e6d..0000000000000000000000000000000000000000
--- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/kernel_inception_distance.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Kernel Inception Distance (KID) from the paper "Demystifying MMD
-GANs". Matches the original implementation by Binkowski et al. at
-https://github.com/mbinkowski/MMD-GAN/blob/master/gan/compute_scores.py"""
-
-import numpy as np
-from . import metric_utils
-
-#----------------------------------------------------------------------------
-
-def compute_kid(opts, max_real, num_gen, num_subsets, max_subset_size):
- # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz
- detector_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt'
- detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer.
-
- real_features = metric_utils.compute_feature_stats_for_dataset(
- opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs,
- rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all()
-
- gen_features = metric_utils.compute_feature_stats_for_generator(
- opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs,
- rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all()
-
- if opts.rank != 0:
- return float('nan')
-
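-    # Unbiased MMD^2 estimate with a cubic polynomial kernel k(x, y) = (x^T y / d + 1)^3,
-    # where d is the Inception feature dimension, averaged over num_subsets random
-    # subsets of the samples.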
- n = real_features.shape[1]
- m = min(min(real_features.shape[0], gen_features.shape[0]), max_subset_size)
- t = 0
- for _subset_idx in range(num_subsets):
- x = gen_features[np.random.choice(gen_features.shape[0], m, replace=False)]
- y = real_features[np.random.choice(real_features.shape[0], m, replace=False)]
- a = (x @ x.T / n + 1) ** 3 + (y @ y.T / n + 1) ** 3
- b = (x @ y.T / n + 1) ** 3
- t += (a.sum() - np.diag(a).sum()) / (m - 1) - b.sum() * 2 / m
- kid = t / num_subsets / m
- return float(kid)
-
-#----------------------------------------------------------------------------
diff --git a/spaces/bkhmsi/Font-To-Sketch/code/collage.py b/spaces/bkhmsi/Font-To-Sketch/code/collage.py
deleted file mode 100644
index 3b1d7943d5bb3e18c74e3286c19f4a69069be2bd..0000000000000000000000000000000000000000
--- a/spaces/bkhmsi/Font-To-Sketch/code/collage.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import os
-import imageio
-import numpy as np
-from glob import glob
-from tqdm import tqdm
-from PIL import Image
-
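-# Stitch several animated GIFs into a grid, playing each clip forward and then
-# backward, and save the result as a single looping collage GIF.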
-if __name__ == "__main__":
-
- path = "/Users/bkhmsi/Desktop/Animal-Words/*.gif"
- save_path = os.path.join(os.path.dirname(path), "collage_loop_25_3.gif")
-
-
- width, height = 250, 250
- # width, height = 100, 100
- nx, ny = 5, 5
- n_frames = 67
- collage = np.ones((n_frames*2, width*nx, height*ny)).astype(np.uint8)*255
-
- filenames = [p for p in glob(path) if os.path.basename(p)[:-4] not in ["palestine", "amin", "collage", "collage_loop_25", "collage_loop_25_2", "collage_loop_25_3a", "collage_loop_7", "collage_1d"]]
- print(f"> {len(filenames)} Files Found")
-
- f_filenames = filenames
- filter = ["horse.gif", "giraffe.gif", "duck.gif", "turtle.gif", "camel.gif", "octopus.gif", "shark.gif"]
- # f_filenames = []
- # for file in filenames:
- # basename = os.path.basename(file)
- # if basename in filter:
- # f_filenames += [file]
-
- assert nx*ny <= len(f_filenames)
-
- for i in range(nx):
- for j in tqdm(range(ny)):
- image = Image.open(f_filenames[i*ny+j])
- assert image.is_animated
- idx = 0
- for frame_idx in range(n_frames):
- image.seek(frame_idx)
- frame = image.convert('L').copy()
- frame = frame.resize((300,300))
- collage[idx, i*width:(i+1)*width,j*height:(j+1)*height] = np.asarray(frame)[25:275, 25:275]
- idx += 1
-
- for frame_idx in reversed(range(n_frames)):
- image.seek(frame_idx)
- frame = image.convert('L').copy()
- frame = frame.resize((300,300))
- collage[idx, i*width:(i+1)*width,j*height:(j+1)*height] = np.asarray(frame)[25:275, 25:275]
- idx += 1
-
-
- imageio.mimsave(save_path, collage)
diff --git a/spaces/cactusAtSea/influencerGPT/app.py b/spaces/cactusAtSea/influencerGPT/app.py
deleted file mode 100644
index ba4c2f1f52023608f78b7da60ddfd4700e6e370f..0000000000000000000000000000000000000000
--- a/spaces/cactusAtSea/influencerGPT/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import streamlit as st
-import openai
-st.set_page_config(layout="wide")
-st.title("Influencer Post Generator")
-
-openai_key = st.text_input('Enter your openai API key...', type="password")
-
-with st.form(key='columns_in_form'):
- c1, c2, c3, c4, c5, c6 = st.columns(6)
- with c1:
- network = st.text_input('Which platform...')
- with c2:
- influencer_name = st.text_input('Enter your influencer name...')
- with c3:
- product_name = st.text_input('Enter your product name...')
- with c4:
- product_features = st.text_input('What features to highlight...')
- with c5:
- must_have = st.text_input('Must have these words...')
- with c6:
- target = st.text_input('Targeting these people...')
- submitButton = st.form_submit_button(label="Surprise Me!", help="Click to see an example post!")
-
-if openai_key:
- openai.api_key = openai_key
-
-if submitButton:
-    text = 'Imagine you are a {} influencer called {}, you need to write a post that promotes {} \
- which targets {} with emojis. I need you to highlight these features: {}, and must include these words: {}'.format(network, influencer_name, product_name, target, product_features, must_have)
- completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[{"role": "user", "content": text}])
- st.text_area(label ="",value=completion["choices"][0]["message"]["content"], height =300)
-
diff --git a/spaces/candlend/vits-hoshimi/sovits/losses.py b/spaces/candlend/vits-hoshimi/sovits/losses.py
deleted file mode 100644
index 41f9be6980713a46824ae9ec5eb8fd7c515d89c5..0000000000000000000000000000000000000000
--- a/spaces/candlend/vits-hoshimi/sovits/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import commons
-
-
-def feature_loss(fmap_r, fmap_g):
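-    # Feature-matching loss: mean absolute difference between real and generated
-    # discriminator feature maps, summed over all layers and scaled by 2.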
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
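-    # Least-squares GAN discriminator loss: real outputs are pushed toward 1 and
-    # generated outputs toward 0; per-discriminator losses are returned as well.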
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
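-    # Least-squares GAN generator loss: generated outputs are pushed toward 1.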
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
- #print(logs_p)
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/evaluation/evaluator.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/evaluation/evaluator.py
deleted file mode 100644
index baf996002b2fddc8c1952408d450b5bf69394f0a..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/evaluation/evaluator.py
+++ /dev/null
@@ -1,224 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import datetime
-import logging
-import time
-from collections import OrderedDict, abc
-from contextlib import ExitStack, contextmanager
-from typing import List, Union
-import torch
-from torch import nn
-
-from detectron2.utils.comm import get_world_size, is_main_process
-from detectron2.utils.logger import log_every_n_seconds
-
-
-class DatasetEvaluator:
- """
- Base class for a dataset evaluator.
-
- The function :func:`inference_on_dataset` runs the model over
-    all samples in the dataset, and uses a DatasetEvaluator to process the inputs/outputs.
-
- This class will accumulate information of the inputs/outputs (by :meth:`process`),
- and produce evaluation results in the end (by :meth:`evaluate`).
- """
-
- def reset(self):
- """
- Preparation for a new round of evaluation.
- Should be called before starting a round of evaluation.
- """
- pass
-
- def process(self, inputs, outputs):
- """
- Process the pair of inputs and outputs.
- If they contain batches, the pairs can be consumed one-by-one using `zip`:
-
- .. code-block:: python
-
- for input_, output in zip(inputs, outputs):
- # do evaluation on single input/output pair
- ...
-
- Args:
- inputs (list): the inputs that's used to call the model.
- outputs (list): the return value of `model(inputs)`
- """
- pass
-
- def evaluate(self):
- """
- Evaluate/summarize the performance, after processing all input/output pairs.
-
- Returns:
- dict:
- A new evaluator class can return a dict of arbitrary format
- as long as the user can process the results.
- In our train_net.py, we expect the following format:
-
- * key: the name of the task (e.g., bbox)
- * value: a dict of {metric name: score}, e.g.: {"AP50": 80}
- """
- pass
-
-
-class DatasetEvaluators(DatasetEvaluator):
- """
- Wrapper class to combine multiple :class:`DatasetEvaluator` instances.
-
- This class dispatches every evaluation call to
- all of its :class:`DatasetEvaluator`.
- """
-
- def __init__(self, evaluators):
- """
- Args:
- evaluators (list): the evaluators to combine.
- """
- super().__init__()
- self._evaluators = evaluators
-
- def reset(self):
- for evaluator in self._evaluators:
- evaluator.reset()
-
- def process(self, inputs, outputs):
- for evaluator in self._evaluators:
- evaluator.process(inputs, outputs)
-
- def evaluate(self):
- results = OrderedDict()
- for evaluator in self._evaluators:
- result = evaluator.evaluate()
- if is_main_process() and result is not None:
- for k, v in result.items():
- assert (
- k not in results
- ), "Different evaluators produce results with the same key {}".format(k)
- results[k] = v
- return results
-
-
-def inference_on_dataset(
- model, data_loader, evaluator: Union[DatasetEvaluator, List[DatasetEvaluator], None]
-):
- """
- Run model on the data_loader and evaluate the metrics with evaluator.
- Also benchmark the inference speed of `model.__call__` accurately.
- The model will be used in eval mode.
-
- Args:
- model (callable): a callable which takes an object from
- `data_loader` and returns some outputs.
-
- If it's an nn.Module, it will be temporarily set to `eval` mode.
- If you wish to evaluate a model in `training` mode instead, you can
- wrap the given model and override its behavior of `.eval()` and `.train()`.
- data_loader: an iterable object with a length.
- The elements it generates will be the inputs to the model.
- evaluator: the evaluator(s) to run. Use `None` if you only want to benchmark,
- but don't want to do any evaluation.
-
- Returns:
- The return value of `evaluator.evaluate()`
- """
- num_devices = get_world_size()
- logger = logging.getLogger(__name__)
- logger.info("Start inference on {} batches".format(len(data_loader)))
-
- total = len(data_loader) # inference data loader must have a fixed length
- if evaluator is None:
- # create a no-op evaluator
- evaluator = DatasetEvaluators([])
- if isinstance(evaluator, abc.MutableSequence):
- evaluator = DatasetEvaluators(evaluator)
- evaluator.reset()
-
- num_warmup = min(5, total - 1)
- start_time = time.perf_counter()
- total_data_time = 0
- total_compute_time = 0
- total_eval_time = 0
- with ExitStack() as stack:
- if isinstance(model, nn.Module):
- stack.enter_context(inference_context(model))
- stack.enter_context(torch.no_grad())
-
- start_data_time = time.perf_counter()
- for idx, inputs in enumerate(data_loader):
- total_data_time += time.perf_counter() - start_data_time
- if idx == num_warmup:
- start_time = time.perf_counter()
- total_data_time = 0
- total_compute_time = 0
- total_eval_time = 0
-
- start_compute_time = time.perf_counter()
- outputs = model(inputs)
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- total_compute_time += time.perf_counter() - start_compute_time
-
- start_eval_time = time.perf_counter()
- evaluator.process(inputs, outputs)
- total_eval_time += time.perf_counter() - start_eval_time
-
- iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup)
- data_seconds_per_iter = total_data_time / iters_after_start
- compute_seconds_per_iter = total_compute_time / iters_after_start
- eval_seconds_per_iter = total_eval_time / iters_after_start
- total_seconds_per_iter = (time.perf_counter() - start_time) / iters_after_start
- if idx >= num_warmup * 2 or compute_seconds_per_iter > 5:
- eta = datetime.timedelta(seconds=int(total_seconds_per_iter * (total - idx - 1)))
- log_every_n_seconds(
- logging.INFO,
- (
- f"Inference done {idx + 1}/{total}. "
- f"Dataloading: {data_seconds_per_iter:.4f} s/iter. "
- f"Inference: {compute_seconds_per_iter:.4f} s/iter. "
- f"Eval: {eval_seconds_per_iter:.4f} s/iter. "
- f"Total: {total_seconds_per_iter:.4f} s/iter. "
- f"ETA={eta}"
- ),
- n=5,
- )
- start_data_time = time.perf_counter()
-
- # Measure the time only for this worker (before the synchronization barrier)
- total_time = time.perf_counter() - start_time
- total_time_str = str(datetime.timedelta(seconds=total_time))
- # NOTE this format is parsed by grep
- logger.info(
- "Total inference time: {} ({:.6f} s / iter per device, on {} devices)".format(
- total_time_str, total_time / (total - num_warmup), num_devices
- )
- )
- total_compute_time_str = str(datetime.timedelta(seconds=int(total_compute_time)))
- logger.info(
- "Total inference pure compute time: {} ({:.6f} s / iter per device, on {} devices)".format(
- total_compute_time_str, total_compute_time / (total - num_warmup), num_devices
- )
- )
-
- results = evaluator.evaluate()
- # An evaluator may return None when not in main process.
- # Replace it by an empty dict instead to make it easier for downstream code to handle
- if results is None:
- results = {}
- return results
-
-
-@contextmanager
-def inference_context(model):
- """
- A context where the model is temporarily changed to eval mode,
- and restored to previous mode afterwards.
-
- Args:
- model: a torch Module
- """
- training_mode = model.training
- model.eval()
- yield
- model.train(training_mode)
diff --git a/spaces/cccc-c/bingo/src/components/button-scroll-to-bottom.tsx b/spaces/cccc-c/bingo/src/components/button-scroll-to-bottom.tsx
deleted file mode 100644
index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000
--- a/spaces/cccc-c/bingo/src/components/button-scroll-to-bottom.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-import { Button, type ButtonProps } from '@/components/ui/button'
-import { IconArrowDown } from '@/components/ui/icons'
-
-export function ButtonScrollToBottom({ className, ...props }: ButtonProps) {
- const isAtBottom = useAtBottom()
-
-  return (
-    <Button
-      className={cn(
-        'transition-opacity duration-300',
-        isAtBottom ? 'opacity-0' : 'opacity-100',
-        className
-      )}
-      onClick={() =>
-        window.scrollTo({ top: document.body.offsetHeight, behavior: 'smooth' })
-      }
-      {...props}
-    >
-      <IconArrowDown />
-    </Button>
-  )
-}
diff --git a/spaces/chainyo/optimum-text-classification/main.py b/spaces/chainyo/optimum-text-classification/main.py
deleted file mode 100644
index d1f69f85d823711a582ae797c757ee4910b0dd4f..0000000000000000000000000000000000000000
--- a/spaces/chainyo/optimum-text-classification/main.py
+++ /dev/null
@@ -1,198 +0,0 @@
-"""⭐ Text Classification with Optimum and ONNXRuntime
-
-Streamlit application to classify text using multiple models.
-
-Author:
- - @ChainYo - https://github.com/ChainYo
-"""
-
-import plotly
-import plotly.figure_factory as ff
-import numpy as np
-import pandas as pd
-import streamlit as st
-
-from pathlib import Path
-from time import sleep
-from typing import Dict, List, Union
-
-from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer, ORTQuantizer
-from optimum.onnxruntime.configuration import OptimizationConfig, AutoQuantizationConfig
-from optimum.pipelines import pipeline as ort_pipeline
-from transformers import BertTokenizer, BertForSequenceClassification
-from transformers import pipeline as pt_pipeline
-
-from utils import calculate_inference_time
-
-
-HUB_MODEL_PATH = "yiyanghkust/finbert-tone"
-BASE_PATH = Path("models")
-ONNX_MODEL_PATH = BASE_PATH.joinpath("model.onnx")
-OPTIMIZED_BASE_PATH = BASE_PATH.joinpath("optimized")
-OPTIMIZED_MODEL_PATH = OPTIMIZED_BASE_PATH.joinpath("model-optimized.onnx")
-QUANTIZED_BASE_PATH = BASE_PATH.joinpath("quantized")
-QUANTIZED_MODEL_PATH = QUANTIZED_BASE_PATH.joinpath("model-quantized.onnx")
-VAR2LABEL = {
- "pt_pipeline": "PyTorch",
- "ort_pipeline": "ONNXRuntime",
- "ort_optimized_pipeline": "ONNXRuntime (Optimized)",
- "ort_quantized_pipeline": "ONNXRuntime (Quantized)",
-}
-
-# Check if repositories exist, if not create them
-BASE_PATH.mkdir(exist_ok=True)
-QUANTIZED_BASE_PATH.mkdir(exist_ok=True)
-OPTIMIZED_BASE_PATH.mkdir(exist_ok=True)
-
-
-def get_timers(
- samples: Union[List[str], str], exp_number: int, only_mean: bool = False
-) -> Dict[str, float]:
- """
- Calculate inference time for each model for a given sample or list of samples.
-
- Parameters
- ----------
- samples : Union[List[str], str]
- Sample or list of samples to calculate inference time for.
- exp_number : int
-        Number of experiments to run.
-    only_mean : bool, optional
-        If True, return only the mean inference time per model instead of the full list of timings.
-
- Returns
- -------
- Dict[str, float]
- Dictionary of inference times for each model for the given samples.
- """
- if isinstance(samples, str):
- samples = [samples]
-
- timers: Dict[str, float] = {}
- for model in VAR2LABEL.keys():
- time_buffer = []
- st.session_state["pipeline"] = load_pipeline(model)
- for _ in range(exp_number):
- with calculate_inference_time(time_buffer):
- st.session_state["pipeline"](samples)
- timers[VAR2LABEL[model]] = np.mean(time_buffer) if only_mean else time_buffer
- return timers
-
-
-def get_plot(timers: Dict[str, Union[float, List[float]]]) -> plotly.graph_objs.Figure:
- """
- Plot the inference time for each model.
-
- Parameters
- ----------
- timers : Dict[str, Union[float, List[float]]]
- Dictionary of inference times for each model.
- """
- data = pd.DataFrame.from_dict(timers, orient="columns")
- colors = ["#84353f", "#b4524b", "#f47e58", "#ffbe67"]
- fig = ff.create_distplot(
- [data[col] for col in data.columns], data.columns, bin_size=0.001, colors=colors, show_curve=False
- )
- fig.update_layout(title_text="Inference Time", xaxis_title="Inference Time (s)", yaxis_title="Number of Samples")
- return fig
-
-
-def load_pipeline(pipeline_name: str) -> None:
- """
- Load a pipeline for a given model.
-
- Parameters
- ----------
- pipeline_name : str
- Name of the pipeline to load.
- """
- if pipeline_name == "pt_pipeline":
- model = BertForSequenceClassification.from_pretrained(HUB_MODEL_PATH, num_labels=3)
- pipeline = pt_pipeline("sentiment-analysis", tokenizer=st.session_state["tokenizer"], model=model)
- elif pipeline_name == "ort_pipeline":
- model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL_PATH, from_transformers=True)
- if not ONNX_MODEL_PATH.exists():
- model.save_pretrained(ONNX_MODEL_PATH)
- pipeline = ort_pipeline("text-classification", tokenizer=st.session_state["tokenizer"], model=model)
- elif pipeline_name == "ort_optimized_pipeline":
- if not OPTIMIZED_MODEL_PATH.exists():
- optimization_config = OptimizationConfig(optimization_level=99)
- optimizer = ORTOptimizer.from_pretrained(HUB_MODEL_PATH, feature="sequence-classification")
- optimizer.export(ONNX_MODEL_PATH, OPTIMIZED_MODEL_PATH, optimization_config=optimization_config)
- optimizer.model.config.save_pretrained(OPTIMIZED_BASE_PATH)
- model = ORTModelForSequenceClassification.from_pretrained(
- OPTIMIZED_BASE_PATH, file_name=OPTIMIZED_MODEL_PATH.name
- )
- pipeline = ort_pipeline("text-classification", tokenizer=st.session_state["tokenizer"], model=model)
- elif pipeline_name == "ort_quantized_pipeline":
- if not QUANTIZED_MODEL_PATH.exists():
- quantization_config = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
- quantizer = ORTQuantizer.from_pretrained(HUB_MODEL_PATH, feature="sequence-classification")
- quantizer.export(ONNX_MODEL_PATH, QUANTIZED_MODEL_PATH, quantization_config=quantization_config)
- quantizer.model.config.save_pretrained(QUANTIZED_BASE_PATH)
- model = ORTModelForSequenceClassification.from_pretrained(
- QUANTIZED_BASE_PATH, file_name=QUANTIZED_MODEL_PATH.name
- )
- pipeline = ort_pipeline("text-classification", tokenizer=st.session_state["tokenizer"], model=model)
- print(type(pipeline))
- return pipeline
-
-
-st.set_page_config(page_title="Optimum Text Classification", page_icon="⭐")
-st.title("⭐ Optimum Text Classification")
-st.subheader("Classify financial news tone with 🤗 Optimum and ONNXRuntime")
-st.markdown("""
-[](https://github.com/ChainYo)
-[](https://huggingface.co/ChainYo)
-[](https://www.linkedin.com/in/thomas-chaigneau-dev/)
-[](https://discord.gg/)
-""")
-
-with st.expander("⭐ Details", expanded=True):
- st.markdown(
- """
- This app is a **demo** of the [🤗 Optimum Text Classification](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum-inference-with-onnx-runtime) pipeline.
- We aim to compare the original pipeline with the ONNXRuntime pipeline.
-
- We use the [Finbert-Tone](https://huggingface.co/yiyanghkust/finbert-tone) model to classify financial news tone for the demo.
-
- You can enter multiple sentences to classify them by separating them with a `; (semicolon)`.
- """
- )
-
-if "init_models" not in st.session_state:
- st.session_state["init_models"] = True
-if st.session_state["init_models"]:
- with st.spinner(text="Loading files and models..."):
- loading_logs = st.empty()
- with loading_logs.container():
- BASE_PATH.mkdir(exist_ok=True)
- QUANTIZED_BASE_PATH.mkdir(exist_ok=True)
- OPTIMIZED_BASE_PATH.mkdir(exist_ok=True)
-
- if "tokenizer" not in st.session_state:
- tokenizer = BertTokenizer.from_pretrained(HUB_MODEL_PATH)
- st.session_state["tokenizer"] = tokenizer
- st.text("✅ Tokenizer loaded.")
- if "pipeline" not in st.session_state:
- for pipeline in VAR2LABEL.keys():
- st.session_state["pipeline"] = load_pipeline(pipeline)
- st.text("✅ Models ready.")
- sleep(2)
- loading_logs.success("🎉 Everything is ready!")
-st.session_state["init_models"] = False
-
-if "inference_timers" not in st.session_state:
- st.session_state["inference_timers"] = {}
-
-exp_number = st.slider("The number of experiments per model.", min_value=10, max_value=300, value=150)
-get_only_mean = st.checkbox("Get only the mean of the inference time for each model.", value=False)
-input_text = st.text_area(
- "Enter text to classify",
- "there is a shortage of capital, and we need extra financing; growth is strong and we have plenty of liquidity; there are doubts about our finances; profits are flat"
-)
-run_inference = st.button("🚀 Run inference")
-
-if run_inference:
- st.text("🔎 Running inference...")
- sentences = input_text.split(";")
- st.session_state["inference_timers"] = get_timers(samples=sentences, exp_number=exp_number, only_mean=get_only_mean)
- st.plotly_chart(get_plot(st.session_state["inference_timers"]), use_container_width=True)
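The hunk above calls `get_timers` and `get_plot`, which are presumably defined earlier in this deleted file and are not shown in the hunk. A minimal sketch of what such a timing helper could look like, assuming `VAR2LABEL` maps pipeline variable names to display labels and reusing the `load_pipeline` function defined above:

```python
from time import perf_counter

def get_timers(samples, exp_number, only_mean=False):
    # Hypothetical re-creation of the helper used above: time each pipeline
    # variant over `exp_number` runs on the same batch of sentences.
    timers = {}
    for var_name, label in VAR2LABEL.items():  # assumed mapping, defined earlier in the file
        pipe = load_pipeline(var_name)
        runs = []
        for _ in range(exp_number):
            start = perf_counter()
            pipe(samples)
            runs.append(perf_counter() - start)
        timers[label] = sum(runs) / len(runs) if only_mean else runs
    return timers
```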
diff --git a/spaces/charles0519/ChuanhuChatGPT/llama_func.py b/spaces/charles0519/ChuanhuChatGPT/llama_func.py
deleted file mode 100644
index c71027dd4e6f99c0c12626cbbf276f407877be04..0000000000000000000000000000000000000000
--- a/spaces/charles0519/ChuanhuChatGPT/llama_func.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import os
-import logging
-
-from llama_index import GPTSimpleVectorIndex
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-from langchain.llms import OpenAI
-import colorama
-
-
-from presets import *
-from utils import *
-
-
-def get_documents(file_src):
- documents = []
- index_name = ""
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- logging.debug(f"file: {file.name}")
- index_name += file.name
- if os.path.splitext(file.name)[1] == ".pdf":
- logging.debug("Loading PDF...")
- CJKPDFReader = download_loader("CJKPDFReader")
- loader = CJKPDFReader()
- documents += loader.load_data(file=file.name)
- elif os.path.splitext(file.name)[1] == ".docx":
- logging.debug("Loading DOCX...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- documents += loader.load_data(file=file.name)
- elif os.path.splitext(file.name)[1] == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- documents += loader.load_data(file=file.name)
- else:
- logging.debug("Loading text file...")
- with open(file.name, "r", encoding="utf-8") as f:
- text = add_space(f.read())
- documents += [Document(text)]
- index_name = sha1sum(index_name)
- return documents, index_name
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=1,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
- num_children=10,
- max_keywords_per_chunk=10,
-):
- os.environ["OPENAI_API_KEY"] = api_key
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- llm_predictor = LLMPredictor(
- llm=OpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key)
- )
- prompt_helper = PromptHelper(
- max_input_size,
- num_outputs,
- max_chunk_overlap,
- embedding_limit,
- chunk_size_limit,
- separator=separator,
- )
- documents, index_name = get_documents(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
-        logging.info("Found a cached index file, loading it...")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
-            logging.debug("Building the index...")
- index = GPTSimpleVectorIndex(
- documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper
- )
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
- return index
- except Exception as e:
- print(e)
- return None
-
-
-def chat_ai(
- api_key,
- index,
- question,
- context,
- chatbot,
-):
- os.environ["OPENAI_API_KEY"] = api_key
-
- logging.info(f"Question: {question}")
-
- response, chatbot_display, status_text = ask_ai(
- api_key,
- index,
- question,
- replace_today(PROMPT_TEMPLATE),
- REFINE_TEMPLATE,
- SIM_K,
- INDEX_QUERY_TEMPRATURE,
- context,
- )
-    if response is None:
-        status_text = "Query failed, please try rephrasing the question"
-        return context, chatbot, status_text
-
- context.append({"role": "user", "content": question})
- context.append({"role": "assistant", "content": response})
- chatbot.append((question, chatbot_display))
-
- os.environ["OPENAI_API_KEY"] = ""
- return context, chatbot, status_text
-
-
-def ask_ai(
- api_key,
- index,
- question,
- prompt_tmpl,
- refine_tmpl,
- sim_k=1,
- temprature=0,
- prefix_messages=[],
-):
- os.environ["OPENAI_API_KEY"] = api_key
-
- logging.debug("Index file found")
- logging.debug("Querying index...")
- llm_predictor = LLMPredictor(
- llm=OpenAI(
- temperature=temprature,
- model_name="gpt-3.5-turbo-0301",
- prefix_messages=prefix_messages,
- )
- )
-
- response = None # Initialize response variable to avoid UnboundLocalError
- qa_prompt = QuestionAnswerPrompt(prompt_tmpl)
- rf_prompt = RefinePrompt(refine_tmpl)
- response = index.query(
- question,
- llm_predictor=llm_predictor,
- similarity_top_k=sim_k,
- text_qa_template=qa_prompt,
- refine_template=rf_prompt,
- response_mode="compact",
- )
-
- if response is not None:
- logging.info(f"Response: {response}")
- ret_text = response.response
- nodes = []
- for index, node in enumerate(response.source_nodes):
- brief = node.source_text[:25].replace("\n", "")
-            nodes.append(
-                f"[{index+1}]\t{brief}...\n{node.source_text}\n"
-            )
- new_response = ret_text + "\n----------\n" + "\n\n".join(nodes)
- logging.info(
- f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}"
- )
- os.environ["OPENAI_API_KEY"] = ""
-        return ret_text, new_response, f"The query used {llm_predictor.last_token_usage} tokens"
- else:
- logging.warning("No response found, returning None")
- os.environ["OPENAI_API_KEY"] = ""
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
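Taken together, the helpers in this deleted module form a small retrieval pipeline: `construct_index` builds (or reloads from disk) a `GPTSimpleVectorIndex` over the uploaded files, and `chat_ai` queries it and appends the exchange to the chat history. A hedged usage sketch, assuming Gradio-style upload objects that only need a `.name` path attribute and a placeholder API key:

```python
from types import SimpleNamespace

# Hypothetical upload objects; in the real app these come from a Gradio file widget.
files = [SimpleNamespace(name="report.pdf"), SimpleNamespace(name="notes.txt")]

index = construct_index(api_key="sk-...", file_src=files)  # placeholder key
if index is not None:
    context, chatbot, status = chat_ai(
        api_key="sk-...",
        index=index,
        question="Summarize the uploaded documents.",
        context=[],
        chatbot=[],
    )
    print(status)
```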
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/cpp/README.md b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/cpp/README.md
deleted file mode 100644
index c877d94c2834da117c49df41aa936614c175c6df..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/cpp/README.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# YOLOX-OpenVINO in C++
-
-This tutorial includes a C++ demo for OpenVINO, as well as some converted models.
-
-### Download OpenVINO models.
-
-| Model | Parameters | GFLOPs | Test Size | mAP | Weights |
-|:------| :----: | :----: | :---: | :---: | :---: |
-| [YOLOX-Nano](../../../exps/default/nano.py) | 0.91M | 1.08 | 416x416 | 25.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano_openvino.tar.gz) |
-| [YOLOX-Tiny](../../../exps/default/yolox_tiny.py) | 5.06M | 6.45 | 416x416 |32.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_tiny_openvino.tar.gz) |
-| [YOLOX-S](../../../exps/default/yolox_s.py) | 9.0M | 26.8 | 640x640 |40.5 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s_openvino.tar.gz) |
-| [YOLOX-M](../../../exps/default/yolox_m.py) | 25.3M | 73.8 | 640x640 |47.2 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m_openvino.tar.gz) |
-| [YOLOX-L](../../../exps/default/yolox_l.py) | 54.2M | 155.6 | 640x640 |50.1 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l_openvino.tar.gz) |
-| [YOLOX-Darknet53](../../../exps/default/yolov3.py) | 63.72M | 185.3 | 640x640 |48.0 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_dark_openvino.tar.gz) |
-| [YOLOX-X](../../../exps/default/yolox_x.py) | 99.1M | 281.9 | 640x640 |51.5 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x_openvino.tar.gz) |
-
-## Install OpenVINO Toolkit
-
-Please visit the [OpenVINO homepage](https://docs.openvinotoolkit.org/latest/get_started_guides.html) for more details.
-
-## Set up the Environment
-
-### For Linux
-
-**Option1. Set up the environment temporarily. You need to run this command every time you start a new shell window.**
-
-```shell
-source /opt/intel/openvino_2021/bin/setupvars.sh
-```
-
-**Option2. Set up the environment permanently.**
-
-*Step1.* For Linux:
-```shell
-vim ~/.bashrc
-```
-
-*Step2.* Add the following line into your file:
-
-```shell
-source /opt/intel/openvino_2021/bin/setupvars.sh
-```
-
-*Step3.* Save and exit the file, then run:
-
-```shell
-source ~/.bashrc
-```
-
-
-## Convert model
-
-1. Export ONNX model
-
- Please refer to the [ONNX tutorial](../../ONNXRuntime). **Note that you should set --opset to 10, otherwise your next step will fail.**
-
-2. Convert ONNX to OpenVINO
-
- ``` shell
- cd /openvino_2021/deployment_tools/model_optimizer
- ```
-
- Install requirements for convert tool
-
- ```shell
- sudo ./install_prerequisites/install_prerequisites_onnx.sh
- ```
-
-   Then convert the model:
-   ```shell
-   python3 mo.py --input_model <ONNX_MODEL> --input_shape <INPUT_SHAPE> [--data_type FP16]
- ```
- For example:
- ```shell
- python3 mo.py --input_model yolox_tiny.onnx --input_shape [1,3,416,416] --data_type FP16
- ```
-
- Make sure the input shape is consistent with [those](yolox_openvino.cpp#L24-L25) in cpp file.
-
-## Build
-
-### Linux
-```shell
-source /opt/intel/openvino_2021/bin/setupvars.sh
-mkdir build
-cd build
-cmake ..
-make
-```
-
-## Demo
-
-### c++
-
-```shell
-./yolox_openvino
-```
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/core/launch.py b/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/core/launch.py
deleted file mode 100644
index 9f8eec61e379f7a4179536742c16609d240b55d6..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/core/launch.py
+++ /dev/null
@@ -1,147 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Code are based on
-# https://github.com/facebookresearch/detectron2/blob/master/detectron2/engine/launch.py
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import sys
-from datetime import timedelta
-from loguru import logger
-
-import torch
-import torch.distributed as dist
-import torch.multiprocessing as mp
-
-import yolox.utils.dist as comm
-
-__all__ = ["launch"]
-
-
-DEFAULT_TIMEOUT = timedelta(minutes=30)
-
-
-def _find_free_port():
- """
- Find an available port of current machine / node.
- """
- import socket
-
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- # Binding to port 0 will cause the OS to find an available port for us
- sock.bind(("", 0))
- port = sock.getsockname()[1]
- sock.close()
- # NOTE: there is still a chance the port could be taken by other processes.
- return port
-
-
-def launch(
- main_func,
- num_gpus_per_machine,
- num_machines=1,
- machine_rank=0,
- backend="nccl",
- dist_url=None,
- args=(),
- timeout=DEFAULT_TIMEOUT,
-):
- """
- Args:
- main_func: a function that will be called by `main_func(*args)`
- num_machines (int): the total number of machines
- machine_rank (int): the rank of this machine (one per machine)
- dist_url (str): url to connect to for distributed training, including protocol
- e.g. "tcp://127.0.0.1:8686".
- Can be set to auto to automatically select a free port on localhost
- args (tuple): arguments passed to main_func
- """
- world_size = num_machines * num_gpus_per_machine
- if world_size > 1:
- # https://github.com/pytorch/pytorch/pull/14391
- # TODO prctl in spawned processes
-
- if dist_url == "auto":
- assert (
- num_machines == 1
- ), "dist_url=auto cannot work with distributed training."
- port = _find_free_port()
- dist_url = f"tcp://127.0.0.1:{port}"
-
- start_method = "spawn"
- cache = vars(args[1]).get("cache", False)
-
- # To use numpy memmap for caching image into RAM, we have to use fork method
- if cache:
- assert sys.platform != "win32", (
- "As Windows platform doesn't support fork method, "
- "do not add --cache in your training command."
- )
- start_method = "fork"
-
- mp.start_processes(
- _distributed_worker,
- nprocs=num_gpus_per_machine,
- args=(
- main_func,
- world_size,
- num_gpus_per_machine,
- machine_rank,
- backend,
- dist_url,
- args,
- ),
- daemon=False,
- start_method=start_method,
- )
- else:
- main_func(*args)
-
-
-def _distributed_worker(
- local_rank,
- main_func,
- world_size,
- num_gpus_per_machine,
- machine_rank,
- backend,
- dist_url,
- args,
- timeout=DEFAULT_TIMEOUT,
-):
- assert (
- torch.cuda.is_available()
- ), "cuda is not available. Please check your installation."
- global_rank = machine_rank * num_gpus_per_machine + local_rank
- logger.info("Rank {} initialization finished.".format(global_rank))
- try:
- dist.init_process_group(
- backend=backend,
- init_method=dist_url,
- world_size=world_size,
- rank=global_rank,
- timeout=timeout,
- )
- except Exception:
- logger.error("Process group URL: {}".format(dist_url))
- raise
-
- # Setup the local process group (which contains ranks within the same machine)
- assert comm._LOCAL_PROCESS_GROUP is None
- num_machines = world_size // num_gpus_per_machine
- for i in range(num_machines):
- ranks_on_i = list(
- range(i * num_gpus_per_machine, (i + 1) * num_gpus_per_machine)
- )
- pg = dist.new_group(ranks_on_i)
- if i == machine_rank:
- comm._LOCAL_PROCESS_GROUP = pg
-
- # synchronize is needed here to prevent a possible timeout after calling init_process_group
- # See: https://github.com/facebookresearch/maskrcnn-benchmark/issues/172
- comm.synchronize()
-
- assert num_gpus_per_machine <= torch.cuda.device_count()
- torch.cuda.set_device(local_rank)
-
- main_func(*args)
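The docstring above describes how `launch` is meant to be called; a hedged example of wiring it up for single-machine multi-GPU training follows. The names `train_worker`, `exp`, and the argparse namespace are illustrative assumptions, not part of the original file; note that `launch` reads `vars(args[1])` to look up a `cache` flag, so the second element of `args` should be a namespace-like object.

```python
import argparse

def train_worker(exp, cli_args):
    # Hypothetical per-process training entry point.
    print("training with", cli_args)

if __name__ == "__main__":
    cli_args = argparse.Namespace(cache=False, batch_size=64)
    exp = object()  # stand-in for a YOLOX Exp instance
    launch(
        train_worker,
        num_gpus_per_machine=8,     # assumes an 8-GPU machine
        num_machines=1,
        machine_rank=0,
        backend="nccl",
        dist_url="auto",            # single machine: pick a free local port
        args=(exp, cli_args),       # args[1] must support vars() for the cache check
    )
```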
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/visual_bert/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/visual_bert/README.md
deleted file mode 100644
index ec197ce5f350aaf20b9a1533f3a836053d8d420c..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/visual_bert/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# VisualBERT Demo
-
-This demo shows usage of the VisualBERT VQA model and is adapted from the LXMERT demo available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/lxmert/demo.ipynb).
-1. Make a virtualenv: ``virtualenv venv`` and activate it: ``source venv/bin/activate``
-2. Install the requirements: ``pip install -r ./requirements.txt``
-3. Usage is shown in ``demo.ipynb``
diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/monotonic_align/__init__.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/monotonic_align/__init__.py
deleted file mode 100644
index aed94600a6b01f4322b371b0c57d5b05713c4dac..0000000000000000000000000000000000000000
--- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/monotonic_align/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
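`maximum_path` expects a score tensor of shape `(batch, frames, tokens)` together with a 0/1 mask of the same shape marking valid positions, and returns a hard monotonic alignment path of the same shape. A small usage sketch under those assumptions (shapes and lengths are made up for illustration, and the compiled `maximum_path_jit` core is assumed to be available):

```python
import torch

b, t_t, t_s = 2, 7, 5                      # batch, decoder frames, text tokens
neg_cent = torch.randn(b, t_t, t_s)        # alignment scores
t_lens = torch.tensor([7, 4])              # valid frames per batch item
s_lens = torch.tensor([5, 3])              # valid tokens per batch item

mask = (torch.arange(t_t)[None, :, None] < t_lens[:, None, None]) & \
       (torch.arange(t_s)[None, None, :] < s_lens[:, None, None])

path = maximum_path(neg_cent, mask.float())  # 0/1 tensor, same shape as neg_cent
```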
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/include/site/python3.11/greenlet.h b/spaces/chuan-hd/law-assistant-chatbot/.venv/include/site/python3.11/greenlet.h
deleted file mode 100644
index d02a16e43426fb1c1bb286f1cda463cb9b1185ad..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/include/site/python3.11/greenlet.h
+++ /dev/null
@@ -1,164 +0,0 @@
-/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */
-
-/* Greenlet object interface */
-
-#ifndef Py_GREENLETOBJECT_H
-#define Py_GREENLETOBJECT_H
-
-
-#include <Python.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/* This is deprecated and undocumented. It does not change. */
-#define GREENLET_VERSION "1.0.0"
-
-#ifndef GREENLET_MODULE
-#define implementation_ptr_t void*
-#endif
-
-typedef struct _greenlet {
- PyObject_HEAD
- PyObject* weakreflist;
- PyObject* dict;
- implementation_ptr_t pimpl;
-} PyGreenlet;
-
-#define PyGreenlet_Check(op) (op && PyObject_TypeCheck(op, &PyGreenlet_Type))
-
-
-/* C API functions */
-
-/* Total number of symbols that are exported */
-#define PyGreenlet_API_pointers 12
-
-#define PyGreenlet_Type_NUM 0
-#define PyExc_GreenletError_NUM 1
-#define PyExc_GreenletExit_NUM 2
-
-#define PyGreenlet_New_NUM 3
-#define PyGreenlet_GetCurrent_NUM 4
-#define PyGreenlet_Throw_NUM 5
-#define PyGreenlet_Switch_NUM 6
-#define PyGreenlet_SetParent_NUM 7
-
-#define PyGreenlet_MAIN_NUM 8
-#define PyGreenlet_STARTED_NUM 9
-#define PyGreenlet_ACTIVE_NUM 10
-#define PyGreenlet_GET_PARENT_NUM 11
-
-#ifndef GREENLET_MODULE
-/* This section is used by modules that uses the greenlet C API */
-static void** _PyGreenlet_API = NULL;
-
-# define PyGreenlet_Type \
- (*(PyTypeObject*)_PyGreenlet_API[PyGreenlet_Type_NUM])
-
-# define PyExc_GreenletError \
- ((PyObject*)_PyGreenlet_API[PyExc_GreenletError_NUM])
-
-# define PyExc_GreenletExit \
- ((PyObject*)_PyGreenlet_API[PyExc_GreenletExit_NUM])
-
-/*
- * PyGreenlet_New(PyObject *args)
- *
- * greenlet.greenlet(run, parent=None)
- */
-# define PyGreenlet_New \
- (*(PyGreenlet * (*)(PyObject * run, PyGreenlet * parent)) \
- _PyGreenlet_API[PyGreenlet_New_NUM])
-
-/*
- * PyGreenlet_GetCurrent(void)
- *
- * greenlet.getcurrent()
- */
-# define PyGreenlet_GetCurrent \
- (*(PyGreenlet * (*)(void)) _PyGreenlet_API[PyGreenlet_GetCurrent_NUM])
-
-/*
- * PyGreenlet_Throw(
- * PyGreenlet *greenlet,
- * PyObject *typ,
- * PyObject *val,
- * PyObject *tb)
- *
- * g.throw(...)
- */
-# define PyGreenlet_Throw \
- (*(PyObject * (*)(PyGreenlet * self, \
- PyObject * typ, \
- PyObject * val, \
- PyObject * tb)) \
- _PyGreenlet_API[PyGreenlet_Throw_NUM])
-
-/*
- * PyGreenlet_Switch(PyGreenlet *greenlet, PyObject *args)
- *
- * g.switch(*args, **kwargs)
- */
-# define PyGreenlet_Switch \
- (*(PyObject * \
- (*)(PyGreenlet * greenlet, PyObject * args, PyObject * kwargs)) \
- _PyGreenlet_API[PyGreenlet_Switch_NUM])
-
-/*
- * PyGreenlet_SetParent(PyObject *greenlet, PyObject *new_parent)
- *
- * g.parent = new_parent
- */
-# define PyGreenlet_SetParent \
- (*(int (*)(PyGreenlet * greenlet, PyGreenlet * nparent)) \
- _PyGreenlet_API[PyGreenlet_SetParent_NUM])
-
-/*
- * PyGreenlet_GetParent(PyObject* greenlet)
- *
- * return greenlet.parent;
- *
- * This could return NULL even if there is no exception active.
- * If it does not return NULL, you are responsible for decrementing the
- * reference count.
- */
-# define PyGreenlet_GetParent \
- (*(PyGreenlet* (*)(PyGreenlet*)) \
- _PyGreenlet_API[PyGreenlet_GET_PARENT_NUM])
-
-/*
- * deprecated, undocumented alias.
- */
-# define PyGreenlet_GET_PARENT PyGreenlet_GetParent
-
-# define PyGreenlet_MAIN \
- (*(int (*)(PyGreenlet*)) \
- _PyGreenlet_API[PyGreenlet_MAIN_NUM])
-
-# define PyGreenlet_STARTED \
- (*(int (*)(PyGreenlet*)) \
- _PyGreenlet_API[PyGreenlet_STARTED_NUM])
-
-# define PyGreenlet_ACTIVE \
- (*(int (*)(PyGreenlet*)) \
- _PyGreenlet_API[PyGreenlet_ACTIVE_NUM])
-
-
-
-
-/* Macro that imports greenlet and initializes C API */
-/* NOTE: This has actually moved to ``greenlet._greenlet._C_API``, but we
- keep the older definition to be sure older code that might have a copy of
- the header still works. */
-# define PyGreenlet_Import() \
- { \
- _PyGreenlet_API = (void**)PyCapsule_Import("greenlet._C_API", 0); \
- }
-
-#endif /* GREENLET_MODULE */
-
-#ifdef __cplusplus
-}
-#endif
-#endif /* !Py_GREENLETOBJECT_H */
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/location.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/location.py
deleted file mode 100644
index 50f761d2d2a13bd101a7db9c259fedc98eed52cf..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/location.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from typing import NamedTuple
-
-
-class FeatureLibLocation(NamedTuple):
- """A location in a feature file"""
-
- file: str
- line: int
- column: int
-
- def __str__(self):
- return f"{self.file}:{self.line}:{self.column}"
diff --git a/spaces/cihyFjudo/fairness-paper-search/Alexandra Stan Playboy Pictures UPD.md b/spaces/cihyFjudo/fairness-paper-search/Alexandra Stan Playboy Pictures UPD.md
deleted file mode 100644
index dfb823cc2afec115e1ca8de6ddaf08cb2303cf63..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Alexandra Stan Playboy Pictures UPD.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Below you can find your search result for alexandra stan. Since you are a big fan of alexandra stan pictures I would suggest to also visit my friend sites and get more free sex pictures of alexandra stan over there in case you already checked all alexandra stan sex picture galleries here at Fooxy Babes.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Free Download Salaam-E-Ishq In Hindi Dubbed Torrent.md b/spaces/cihyFjudo/fairness-paper-search/Free Download Salaam-E-Ishq In Hindi Dubbed Torrent.md
deleted file mode 100644
index e0dda17748ca9067bf718bbc88335b38bdf874a1..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Free Download Salaam-E-Ishq In Hindi Dubbed Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Free Download Salaam-E-Ishq In Hindi Dubbed Torrent
Kaspersky Lab offers another way to activate software on a new computer, without needing to enter your activation code or license key, by using the website my.kaspersky.com. To use this method, you will first need to create an account and then connect your application to your account.
-
In 2022 March Kaspersky received a cyber immunity registration trademark in the United States. The registration gives Kaspersky the exclusive right to use Kaspersky Cyber Immunity to identify it's products. It also confirms that the trademark has distinctive features checked against specific criteria by national government agencies.[95]
When you disable self defense in settings, make sure you click "ok" to confirm otherwise you will get that error message. I made the same mistake and I ended up having a BSOD problem. (Did a quick recovery, reistalled kaspersky and followed these steps) Now it's rolling well.
-
1.Disable the self defence 2. change the product status to "Release" from "Beta" in the registry. 3.reboot the computer 4.Now it ask to activate the kaspersky..! 5.Now do the same registry hack..! by changing product status "Release" to "Beta"...
-
i spent couple of hours to make my kaspersky full activated.finally i found this trick..it greats man..thanks a lot.may God Bless You all..if you have trick coming soon for kaspersky activation please email me...Egimbb@gmail.com
-
1. Remove any keys you have installed. 2. Do the same as you would when doing the beta hack disable self defence and protection switch off kaspersky. 3. Change the registry key from beta to release. 4. Turn Kaspersky on 5. You will now have the option of trial version, activate your trial for 30 days. 6. Enable self defence and protection. 7. To re set the trial do the beta hack but dont try to activate the beta version cause it wont work but instead follow steps 1-6 again and you will find the option of trial version there again.
-
Is there is any hacker in the world who can hack kaspersky registration or give me activation code of kaspersky latest version. I think there is no one in the world who can hack kaspersky activation code and give it to me. If you can help please please give it's activation code of kaspersky latest version activation code
-
2.3. Access to Product. Product is provided by means of granting to User access to the web-based portal at ksc.kaspersky.com ("Portal") or successor URL. User will identify the username and password that are used for access to User's account on Portal. User will not share its username or password with any third party and will be responsible and liable for the acts or omissions of any person who accesses Product using passwords or access procedures provided to User. Kaspersky Lab reserves the right to refuse registration of, or to suspend or cancel, login IDs used by User to access the Product for any reason, including if User violates the terms and conditions set forth in this Agreement.
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Master Lock Combination Using Serial Number.md b/spaces/cihyFjudo/fairness-paper-search/Master Lock Combination Using Serial Number.md
deleted file mode 100644
index 54fe56f0fa4468d5dcfec586ee71e25437a80e73..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Master Lock Combination Using Serial Number.md
+++ /dev/null
@@ -1,110 +0,0 @@
-## Master Lock Combination Using Serial Number
-
-
-
-
-
-
-
-
-
-**Click Here === [https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txld9&sa=D&sntz=1&usg=AOvVaw3\_y7b2l9tFHUpIYlCIBMCk](https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txld9&sa=D&sntz=1&usg=AOvVaw3\_y7b2l9tFHUpIYlCIBMCk)**
-
-
-
-
-
-
-
-
-
-
-
- Here is a possible title and article for your keyword:
-
-# How to Unlock a Master Lock with the Serial Number
-
-
-
-If you have a Master Lock that you forgot the combination to, don't worry. You can still unlock it with the serial number. The serial number is a small number stamped on the back of the lock, usually near the bottom. Here are some steps to help you unlock your Master Lock with the serial number.
-
-
-
-1. Find the serial number on your lock. It should be a four or five digit number. Write it down or take a picture of it.
-
-2. Visit the Master Lock website and fill out the lost combination form. You will need to provide your name, address, phone number, email address, and the serial number of your lock. You will also need to print out the form and have it notarized by a Notary Public to prove that you are the owner of the lock.
-
-3. Mail the notarized form to Master Lock Warehouse at 24 North Free Port Drive, Nogales, AZ 85621. You can also scan and email the form to [combos@masterlock.com](mailto:combos@masterlock.com). It may take up to 4-6 weeks for Master Lock to process your request and send you your combination.
-
-4. Alternatively, you can bring your lock to a Master Lock distributor or retailer and ask them to contact Master Lock on your behalf. They will need to see your lock and verify that it is not attached to anything. They may charge a fee for this service.
-
-5. Another option is to send a photo of your lock via the contact form on the Master Lock website. The photo should clearly show the serial number and that the lock is not attached to anything. Master Lock will respond in 7-10 days with your combination.
-
-
-
-Once you receive your combination, you can unlock your Master Lock by turning the dial clockwise three times, stopping on the first number, then turning counterclockwise one full turn past the first number and stopping on the second number, then turning clockwise again and stopping on the third number. Pull up on the shackle to open the lock.
-
-
-
-If you want to change your combination, you can use the reset tool that came with your lock. Insert it into the shackle hole and turn it so that the Master logo faces you. Then rotate the dial three times clockwise to clear the old combination and enter your new one. Remove the reset tool and close the lock.
-
-
-
-Remember to write down your combination in a safe place or use an online service like Master Lock Vault to store it securely.
-
-Here is a possible continuation of the article:
-
-## Why Use a Master Lock with a Serial Number?
-
-
-
-A Master Lock with a serial number is a great way to secure your belongings and valuables. Unlike other locks that have fixed combinations or keys that can be lost or stolen, a Master Lock with a serial number allows you to reset your combination or retrieve it if you forget it. This way, you can always access your lock and change your combination as often as you like.
-
-
-
-A Master Lock with a serial number also has other benefits, such as:
-
-
-
-- It is durable and resistant to weather, rust, and corrosion.
-
-- It has a hardened steel shackle that can withstand cutting and prying.
-
-- It has a smooth dial that is easy to turn and align.
-
-- It comes in different sizes, colors, and styles to suit your preferences and needs.
-
-- It has a lifetime warranty from Master Lock.
-
-
-
-You can use a Master Lock with a serial number for various purposes, such as locking your locker, bike, shed, gate, suitcase, or storage unit. You can also use it for indoor or outdoor applications. Just make sure to choose the right size and type of lock for your intended use.
-
-
-
-## How to Prevent Losing Your Combination
-
-
-
-While it is possible to unlock your Master Lock with the serial number if you lose your combination, it is still better to prevent losing it in the first place. Here are some tips to help you remember your combination and avoid losing it:
-
-
-
-- Choose a combination that is easy for you to remember but hard for others to guess. You can use letters or numbers that have a personal meaning to you, such as your initials, birthday, favorite color, or pet's name.
-
-- Write down your combination in a safe place that only you can access. You can use a notebook, a sticky note, a password manager app, or an online service like Master Lock Vault. Do not write it on the lock itself or anywhere near it.
-
-- Practice opening your lock several times until you memorize your combination. You can also say it out loud or in your head as you dial it.
-
-- Change your combination regularly to keep it fresh and secure. You can use the reset tool that came with your lock to do this. Just make sure to write down your new combination and update it in your records.
-
-
-
-By following these tips, you can reduce the chances of losing your combination and having to unlock your Master Lock with the serial number. However, if you do lose it, don't panic. You can always contact Master Lock and get your combination back.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Microelectronic Circuits Sedra Smith 6th edition Solution Manual-torrent.348 Get Access to the Most Comprehensive and Updated Resource.md b/spaces/cihyFjudo/fairness-paper-search/Microelectronic Circuits Sedra Smith 6th edition Solution Manual-torrent.348 Get Access to the Most Comprehensive and Updated Resource.md
deleted file mode 100644
index c0dabae09b364cc2ac37a6363f088feffce00a9f..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Microelectronic Circuits Sedra Smith 6th edition Solution Manual-torrent.348 Get Access to the Most Comprehensive and Updated Resource.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Microelectronic Circuits Sedra Smith 6th edition Solution Manual-torrent.348
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Naked Girl Pics No Face VERIFIED.md b/spaces/cihyFjudo/fairness-paper-search/Naked Girl Pics No Face VERIFIED.md
deleted file mode 100644
index 9f9906dbfeff790cb944314824f4a515671e5451..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Naked Girl Pics No Face VERIFIED.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
Lots of casual relationships begin over dating apps and text nowadays, and sending photos can be a fun way to keep the heat going strong. If you want to get pics from a special lady, keeping things classy and respectful is the key to success. Here are some pointers on how to get pics from a girl over text using clear communication, proper timing, and a little patience!
-
1 Beautiful naked label without face 2 Girls cum on the tummy 3 Female pussy first -person view 4 Naked boobs at home on the bed 5 Naked woman without head 6 Beautiful chest and flat stomach 7 Naked breasts Home 8 Women with shaved pubis 9 Beautiful Sisechki Selfie 10 Naked chest without face naked at home 11 Super tits nipples selfie 12 Beautiful naked women without face 13 Naked body without face 14 Beautiful girls cum on their stomach 15 Selfies Private naked without face 16 18 year old brunette naked with piercing on the mirror 17 Naked women in bed without face 18 Cooked beautifully 19 Beautiful Japanese boobs 20 Beautiful naked women without face 21 MoralHexx 22 Beautiful naked body sexy homemade 23 Naked female breasts without face 24 Beautiful naked women in the bathroom 25 Girls in panties crustacean 26 Exciting naked women 27 Naked slender women with a press 28 Modern girls nude 29 Beautiful neat boobs 30 Intimate parts of the female body 31 Naked female charms 32 Naked female body private 33 Mike Dowson Anna Krivosheina 34 Mary shum 35 The most beautiful naked female body 36 Feminine Body lines Nude 37 Beautiful naked body of a woman 38 Perfect female body nude 39 Sleeping girls naked 40 The girl folded her hands on her chest nude 41 Sylvia Karuzo Mavrin naked 42 Naked sports women 43 Beautiful naked female body 44 Sylvia Caruso Erotiac 45 Beautiful female body nude 46 Girl with big tits lies selfie 47 Naked female charms 48 The girl took a picture of the boobs 49 Blonde's bare chest without face 50 Girl with big tits in bed 51 Beautiful naked Asians in bed 52 Ordinary naked female body 53 The most beautiful naked women 54 Madison Scott finished on the chest 55 Seductive girls nude 56 Beautiful female body nude 57 Beauties naked with a toy 58 Erotic photos 59 60 61 62 63 Nude Body No Face 64 65 66 67 68
1 Beautiful naked women without face 2Private selfie breast in the toilet 3Sexy girls take a picture of themselves 4Naked in the bathroom in front of the mirror 5Beautiful erotic selfies 6Huge boobs from above 7Avari Rain 8Self -shootings of beautiful girls nude 9Tits in the hostel selfie without face 10 Beautiful Sisechki Selfie 11Beautiful breasts in the toilet 12Took a picture of her big boobs 13Celine Farach naked 14Selfie Big breasts lying 15Sexy selfie miniature 16Selfie boobs waist -back lying on the back 17Erotic selfie in bed 18Erotic selfie in the entrance 19Tits in the bathroom in front of the mirror 20Selfies Big breasts lying 21Erotic selfies of girls 22 I took a picture of my chest in bed 23Chrissy Stiles Tits Selfie 24Ava Addams Naked Private 25Neat elastic breasts 26Selfie boobs with brunettes face 27Salyukova Dalvina Selfie boobs 28Erotic photos of girls selfie 29Beautiful chest blonde selfie 30View of the eyes of a girl xxx 31Big boobs in front of the mirror 32 Big ass in front of the mirror 33Selfie Big breasts iPhone 34Girls take pictures of themselves nude 35Naked girls in the mirror 36Original erotic selfies
-
She was neither tall nor short, nor stout norslender; nor was she beautiful, nor was sheplain. She wore a figured lawn, cut a little lowin the back, that exposed a round, soft nuquewith a few little clinging circlets of soft, brownhair. Her hat was of white straw, cocked upon the side with a bunch of pansies, and shewore gray lisle-thread gloves. The girl seemedvery warm and kept mopping her face. Shevainly sought her fan, then she fanned herselfwith her handkerchief, and finally made an attemptto open the window. She might as wellhave tried to move the banks of Red river.
-
The girls who came in wagons and onponies from a distance wore, for the mostpart, calico dresses and sun-bonnets. Theirfinery they brought along in pillow-slips orpinned up in sheets and towels. With thesethey at once retired to an upper room; later toappear be-ribboned and be-furbelowed; theirfaces masked with starch powder, but never atouch of rouge.
-
THE sun was just far enough in the westto send inviting shadows. In the centreof a small field, and in the shade of ahaystack which was there, a girl lay sleeping.She had slept long and soundly, when somethingawoke her as suddenly as if it had been ablow. She opened her eyes and stared a momentup in the cloudless sky. She yawnedand stretched her long brown legs and arms,lazily. Then she arose, never minding thebits of straw that clung to her black hair, toher red bodice, and the blue cotonade skirtthat did not reach her naked ankles.
-
One of the men - a pleasant-faced youngster- drew a sketch book from his pocket andbegan to make a picture of the girl. Shestayed motionless, her hands behind her, andher wide eyes fixed earnestly upon him.
-
When the woman asked her again afteranother week if she were still pleased, she wasnot so sure. And again when she questionedCaline the girl turned away, and went to sitbehind the big, yellow cistern, to cry unobserved.For she knew now that it was not thegreat city and its crowds of people she hadso eagerly sought; but the pleasant-faced boy,who had made her picture that day under themulberry tree.
-
All the morning Janie had been escorting aprocession of street Arabs up and down thestairs to view the remains. One of them - alittle girl, who had had her face washed andhad made a species of toilet for the occasion- refused to be dragged away. She stayedseated as if at an entertainment, fascinatedalternately by the long, still figure of MamzelleAglaé, the mumbling lips of Purgatory Mary,and the silver candlesticks.
-
-
After they quitted the store, 'Polyte, with aperplexed expression upon his face, leaned fora moment against one of the whitewashedpillars, watching the girl cross the yard. Shehad folded her sunbonnet into a pad, whichshe placed beneath the heavy pail that shebalanced upon her head. She walked upright,with a slow, careful tread. Two of the yarddogs that had stood a moment before uponthe threshold of the store door, quivering andwagging their tails, were following her now,with a little businesslike trot. 'Polyte calledthem back.
-
Once, down the bank of the bayou, when'Polyte came upon Azélie unexpectedly, andwas therefore unprepared to resist the shockof her sudden appearance, he seized her in hisarms, and covered her face with kisses. Shewas not indignant; she was not flustered oragitated, as might have been a susceptible,coquettish girl; she was only astonished, andannoyed.
-
The day was a warm one, but that did notprevent a creepy chilliness seizing hold of me.The feeling was generated by disappointment,anger, dismay and various other disagreeablesensations which I cannot find names for,Had I been intentionally deceived and misled?Was this some impertinent pleasantry on thepart of Cavanelle? Or rather had not thegirl's voice undergone some hideous transformationsince her brother had listened to it?I dreaded to look at him, fearing to see horrorand astonishment depicted on his face. WhenI did look, his expression was earnestly attentiveand beamed approval of the strains towhich he measured time by a slow, satisfiedmotion of the hand.
-
Late in the afternoon she went and stood onher doorstep, and looked uneasily and anxiouslyout upon the almost deserted street.When a little girl came walking by, - a sweetchild with a frank and innocent face, uponwhose word she knew she could rely, - TanteCat'rinette invited her to enter.
-
The main reason we keep our girl's face off line is because we believe she shouldn't end up on Facebook, or anywhere on the Internet, until she's ready. Until she wants that. That choice belongs to my daughter, and until she can tell us otherwise, we'll keep her face off the web.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/trustedhost.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/trustedhost.py
deleted file mode 100644
index 08d7e035315677856fd2cd0be2044689b57619bf..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/trustedhost.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from starlette.middleware.trustedhost import ( # noqa
- TrustedHostMiddleware as TrustedHostMiddleware,
-)
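This shim only re-exports Starlette's `TrustedHostMiddleware` under the `fastapi.middleware` namespace. A brief usage sketch (the host names are placeholders):

```python
from fastapi import FastAPI
from fastapi.middleware.trustedhost import TrustedHostMiddleware

app = FastAPI()
# Reject any request whose Host header does not match the allow-list.
app.add_middleware(TrustedHostMiddleware, allowed_hosts=["example.com", "*.example.com"])
```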
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vc1dsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vc1dsp_init_arm.c
deleted file mode 100644
index 5f2c75904826a22cb4c5c138bc818ae817811764..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vc1dsp_init_arm.c
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-#include "libavutil/arm/cpu.h"
-#include "libavcodec/arm/startcode.h"
-#include "libavcodec/vc1dsp.h"
-#include "vc1dsp.h"
-
-av_cold void ff_vc1dsp_init_arm(VC1DSPContext *dsp)
-{
- int cpu_flags = av_get_cpu_flags();
-
-#if HAVE_ARMV6
- if (have_setend(cpu_flags))
- dsp->startcode_find_candidate = ff_startcode_find_candidate_armv6;
-#endif
- if (have_neon(cpu_flags))
- ff_vc1dsp_init_neon(dsp);
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cri_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cri_parser.c
deleted file mode 100644
index 9295f823ce5d246788f3ee8b1996abf3993d8ccc..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cri_parser.c
+++ /dev/null
@@ -1,105 +0,0 @@
-/*
- * CRI parser
- * Copyright (c) 2021 Paul B Mahol
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * CRI parser
- */
-
-#include "libavutil/bswap.h"
-#include "libavutil/common.h"
-
-#include "parser.h"
-
-typedef struct CRIParser {
- ParseContext pc;
- int count;
- int chunk;
- int read_bytes;
- int skip_bytes;
-} CRIParser;
-
-#define KEY (((uint64_t)'\1' << 56) | ((uint64_t)'\0' << 48) | \
- ((uint64_t)'\0' << 40) | ((uint64_t)'\0' << 32) | \
- ((uint64_t)'\4' << 24) | ((uint64_t)'\0' << 16) | \
- ((uint64_t)'\0' << 8) | ((uint64_t)'\0' << 0))
-
-static int cri_parse(AVCodecParserContext *s, AVCodecContext *avctx,
- const uint8_t **poutbuf, int *poutbuf_size,
- const uint8_t *buf, int buf_size)
-{
- CRIParser *bpc = s->priv_data;
- uint64_t state = bpc->pc.state64;
- int next = END_NOT_FOUND, i = 0;
-
- s->pict_type = AV_PICTURE_TYPE_I;
- s->key_frame = 1;
- s->duration = 1;
-
- *poutbuf_size = 0;
- *poutbuf = NULL;
-
- for (; i < buf_size; i++) {
- state = (state << 8) | buf[i];
- bpc->read_bytes++;
-
- if (bpc->skip_bytes > 0) {
- bpc->skip_bytes--;
- if (bpc->skip_bytes == 0)
- bpc->read_bytes = 0;
- } else {
- if (state != KEY)
- continue;
- }
-
- if (bpc->skip_bytes == 0 && bpc->read_bytes >= 8) {
- bpc->skip_bytes = av_bswap32(state & 0xFFFFFFFF);
- bpc->chunk = state >> 32;
- bpc->read_bytes = 0;
- bpc->count++;
- }
-
- if (bpc->chunk == 0x01000000 && bpc->skip_bytes == 4 &&
- bpc->read_bytes == 0 && bpc->count > 1) {
- next = i - 7;
- break;
- }
- }
-
- bpc->pc.state64 = state;
- if (ff_combine_frame(&bpc->pc, next, &buf, &buf_size) < 0) {
- *poutbuf = NULL;
- *poutbuf_size = 0;
- return buf_size;
- }
-
- *poutbuf = buf;
- *poutbuf_size = buf_size;
-
- return next;
-}
-
-const AVCodecParser ff_cri_parser = {
- .codec_ids = { AV_CODEC_ID_CRI },
- .priv_data_size = sizeof(CRIParser),
- .parser_parse = cri_parse,
- .parser_close = ff_parse_close,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/diractab.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/diractab.c
deleted file mode 100644
index 816b9393ba00596bc0c1e62961fd7e1195648b82..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/diractab.c
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- * Copyright (C) 2016 Open Broadcast Systems Ltd.
- * Author (C) 2016 Rostislav Pehlivanov
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "diractab.h"
-
-const uint8_t ff_dirac_default_qmat[7][4][4] = {
- { { 5, 3, 3, 0}, { 0, 4, 4, 1}, { 0, 5, 5, 2}, { 0, 6, 6, 3} },
- { { 4, 2, 2, 0}, { 0, 4, 4, 2}, { 0, 5, 5, 3}, { 0, 7, 7, 5} },
- { { 5, 3, 3, 0}, { 0, 4, 4, 1}, { 0, 5, 5, 2}, { 0, 6, 6, 3} },
- { { 8, 4, 4, 0}, { 0, 4, 4, 0}, { 0, 4, 4, 0}, { 0, 4, 4, 0} },
- { { 8, 4, 4, 0}, { 0, 4, 4, 0}, { 0, 4, 4, 0}, { 0, 4, 4, 0} },
- { { 0, 4, 4, 8}, { 0, 8, 8, 12}, { 0, 13, 13, 17}, { 0, 17, 17, 21} },
- { { 3, 1, 1, 0}, { 0, 4, 4, 2}, { 0, 6, 6, 5}, { 0, 9, 9, 7} },
-};
-
-const int32_t ff_dirac_qscale_tab[116] = {
- 4, 5, 6, 7, 8, 10, 11, 13,
- 16, 19, 23, 27, 32, 38, 45, 54,
- 64, 76, 91, 108, 128, 152, 181, 215,
- 256, 304, 362, 431, 512, 609, 724, 861,
- 1024, 1218, 1448, 1722, 2048, 2435, 2896, 3444,
- 4096, 4871, 5793, 6889, 8192, 9742, 11585, 13777,
- 16384, 19484, 23170, 27554, 32768, 38968, 46341, 55109,
- 65536, 77936, 92682, 110218, 131072, 155872, 185364, 220436,
- 262144, 311744, 370728, 440872, 524288, 623487, 741455, 881744,
- 1048576, 1246974, 1482910, 1763488, 2097152, 2493948, 2965821, 3526975,
- 4194304, 4987896, 5931642, 7053950, 8388608, 9975792, 11863283, 14107901,
- 16777216, 19951585, 23726566, 28215802, 33554432, 39903169, 47453133, 56431603,
- 67108864, 79806339, 94906266, 112863206, 134217728, 159612677, 189812531, 225726413,
- 268435456, 319225354, 379625062, 451452825, 536870912, 638450708, 759250125, 902905651,
- 1073741824,1276901417,1518500250,1805811301,/*2147483648,2553802834,3037000500,3611622603,
- 4294967296*/
-};
-
-const int32_t ff_dirac_qoffset_intra_tab[120] = {
- 1, 2, 3, 4, 4, 5, 6, 7,
- 8, 10, 12, 14, 16, 19, 23, 27,
- 32, 38, 46, 54, 64, 76, 91, 108,
- 128, 152, 181, 216, 256, 305, 362, 431,
- 512, 609, 724, 861, 1024, 1218, 1448, 1722,
- 2048, 2436, 2897, 3445, 4096, 4871, 5793, 6889,
- 8192, 9742, 11585, 13777, 16384, 19484, 23171, 27555,
- 32768, 38968, 46341, 55109, 65536, 77936, 92682, 110218,
- 131072, 155872, 185364, 220436, 262144, 311744, 370728, 440872,
- 524288, 623487, 741455, 881744, 1048576, 1246974, 1482911, 1763488,
- 2097152, 2493948, 2965821, 3526975, 4194304, 4987896, 5931642, 7053951,
- 8388608, 9975793, 11863283, 14107901, 16777216, 19951585, 23726567, 28215802,
- 33554432, 39903170, 47453133, 56431603, 67108864, 79806339, 94906266, 112863207,
- 134217728, 159612677, 189812531, 225726413, 268435456, 319225354, 379625063, 451452826,
- 536870912, 638450709, 759250125, 902905651,1073741824,1276901417,1518500250,1805811302,
- /*2147483648, 2553802834, 3037000500, 3611622603, 4294967296,*/
-};
-
-const int ff_dirac_qoffset_inter_tab[122] = {
- 1, 2, 2, 3, 3, 4, 4, 5,
- 6, 7, 9, 10, 12, 14, 17, 20,
- 24, 29, 34, 41, 48, 57, 68, 81,
- 96, 114, 136, 162, 192, 228, 272, 323,
- 384, 457, 543, 646, 768, 913, 1086, 1292,
- 1536, 1827, 2172, 2583, 3072, 3653, 4344, 5166,
- 6144, 7307, 8689, 10333, 12288, 14613, 17378, 20666,
- 24576, 29226, 34756, 41332, 49152, 58452, 69512, 82664,
- 98304, 116904, 139023, 165327, 196608, 233808, 278046, 330654,
- 393216, 467615, 556091, 661308, 786432, 935231, 1112183, 1322616,
- 1572864, 1870461, 2224366, 2645231, 3145728, 3740922, 4448731, 5290463,
- 6291456, 7481844, 8897462, 10580926, 12582912, 14963688, 17794925, 21161851,
- 25165824, 29927377, 35589850, 42323702, 50331648, 59854754, 71179699, 84647405,
- 100663296, 119709508, 142359398, 169294809, 201326592, 239419016, 284718797, 338589619,
- 402653184, 478838031, 569437594, 677179238, 805306368, 957676063,1138875188,1354358476,
- 1610612736, 1915352125, /*2277750375, 2708716952, 3221225472, 3830704250,*/
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/huffyuv.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/huffyuv.c
deleted file mode 100644
index aaba313bf11341b3f36e96297837fe9d2920431e..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/huffyuv.c
+++ /dev/null
@@ -1,84 +0,0 @@
-/*
- * huffyuv codec for libavcodec
- *
- * Copyright (c) 2002-2014 Michael Niedermayer
- *
- * see https://multimedia.cx/huffyuv.txt for a description of
- * the algorithm used
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * huffyuv codec for libavcodec.
- */
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-#include "libavutil/error.h"
-#include "libavutil/log.h"
-#include "libavutil/mem.h"
-
-#include "huffyuv.h"
-
-int ff_huffyuv_generate_bits_table(uint32_t *dst, const uint8_t *len_table, int n)
-{
- int lens[33] = { 0 };
- uint32_t codes[33];
-
- for (int i = 0; i < n; i++)
- lens[len_table[i]]++;
-
- codes[32] = 0;
- for (int i = FF_ARRAY_ELEMS(lens) - 1; i > 0; i--) {
- if ((lens[i] + codes[i]) & 1) {
- av_log(NULL, AV_LOG_ERROR, "Error generating huffman table\n");
- return -1;
- }
- codes[i - 1] = (lens[i] + codes[i]) >> 1;
- }
- for (int i = 0; i < n; i++) {
- if (len_table[i])
- dst[i] = codes[len_table[i]]++;
- }
- return 0;
-}
-
-av_cold int ff_huffyuv_alloc_temp(uint8_t *temp[3], uint16_t *temp16[3], int width)
-{
- int i;
-
- for (i=0; i<3; i++) {
- temp[i] = av_malloc(4 * width + 16);
- if (!temp[i])
- return AVERROR(ENOMEM);
- temp16[i] = (uint16_t*)temp[i];
- }
- return 0;
-}
-
-av_cold void ff_huffyuv_common_end(uint8_t *temp[3], uint16_t *temp16[3])
-{
- int i;
-
- for(i = 0; i < 3; i++) {
- av_freep(&temp[i]);
- temp16[i] = NULL;
- }
-}
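`ff_huffyuv_generate_bits_table` assigns canonical Huffman codes from a table of per-symbol code lengths by counting symbols per length, then walking from the longest length down and halving the running code count at each step. A direct Python transcription of that logic, for illustration only (not part of FFmpeg):

```python
def generate_bits_table(len_table):
    lens = [0] * 33
    for length in len_table:
        lens[length] += 1

    codes = [0] * 33
    # At every length, the number of codes used there plus the first free code
    # must be even, otherwise the lengths do not describe a complete Huffman tree.
    for i in range(32, 0, -1):
        if (lens[i] + codes[i]) & 1:
            raise ValueError("error generating huffman table")
        codes[i - 1] = (lens[i] + codes[i]) >> 1

    dst = [0] * len(len_table)
    for i, length in enumerate(len_table):
        if length:
            dst[i] = codes[length]
            codes[length] += 1
    return dst
```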
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp3dsp_init_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp3dsp_init_mips.c
deleted file mode 100644
index 4252ff790ea5adc7b7484f5c4ad2eb403fa1eddf..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp3dsp_init_mips.c
+++ /dev/null
@@ -1,50 +0,0 @@
-
-/*
- * Copyright (c) 2018 gxw
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/mips/cpu.h"
-#include "config.h"
-#include "libavutil/attributes.h"
-#include "libavcodec/avcodec.h"
-#include "libavcodec/vp3dsp.h"
-#include "vp3dsp_mips.h"
-
-av_cold void ff_vp3dsp_init_mips(VP3DSPContext *c, int flags)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_mmi(cpu_flags)) {
- c->put_no_rnd_pixels_l2 = ff_put_no_rnd_pixels_l2_mmi;
-
- c->idct_add = ff_vp3_idct_add_mmi;
- c->idct_put = ff_vp3_idct_put_mmi;
- c->idct_dc_add = ff_vp3_idct_dc_add_mmi;
- }
-
- if (have_msa(cpu_flags)) {
- c->put_no_rnd_pixels_l2 = ff_put_no_rnd_pixels_l2_msa;
-
- c->idct_add = ff_vp3_idct_add_msa;
- c->idct_put = ff_vp3_idct_put_msa;
- c->idct_dc_add = ff_vp3_idct_dc_add_msa;
- c->v_loop_filter = ff_vp3_v_loop_filter_msa;
- c->h_loop_filter = ff_vp3_h_loop_filter_msa;
- }
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Airline Manager How to Run a Successful Airline Business in 2023.md b/spaces/congsaPfin/Manga-OCR/logs/Airline Manager How to Run a Successful Airline Business in 2023.md
deleted file mode 100644
index bcf3da9d888bd702d06c38b7168c07f252227244..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Airline Manager How to Run a Successful Airline Business in 2023.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
Airline Manager: A Career Guide
-
Have you ever wondered what it takes to run an airline? Do you have a passion for aviation and a knack for leadership? If so, you might be interested in becoming an airline manager. An airline manager is a professional who oversees all aspects of an airline's operations, from the maintenance of the aircrafts to the satisfaction of the customers. An airline manager is responsible for ensuring the safety, efficiency, and profitability of the airline, as well as managing a team of staff. It is a challenging but rewarding career that requires a combination of education, experience, skills, and certifications.
To become an airline manager, you typically need to have:
-
-
Education: A bachelor's degree in aviation management, business administration, finance, or a related field. Some airlines may prefer a master's degree or a professional diploma in aviation management.
-
Experience: At least five years of experience working in the aviation industry, preferably in a supervisory or managerial role.
Certifications: Depending on the position and the employer, you may need to have certain certifications or licenses, such as a pilot's license, an air traffic controller's license, or a certification from the International Air Transport Association (IATA).
-
-
You can also gain valuable experience and knowledge by completing an internship or a training program at an airline or an airport. This can help you develop network connections in the industry and learn about the best practices and trends in aviation management.
-
What are the duties and responsibilities of an airline manager?
-
An airline manager typically has a wide range of duties and responsibilities, which can include:
-
-
Managing and maintaining the fleet of aircrafts: An airline manager ensures that all aircrafts are in good condition and comply with safety standards. They also order new aircrafts when needed and oversee their delivery and installation.
-
Ensuring safety and compliance with regulations: An airline manager ensures that all operations follow the local and federal laws and regulations regarding aviation. They also implement safety procedures and policies for staff and passengers, including emergency responses to accidents or incidents.
-
Supervising and leading staff: An airline manager manages and coordinates a team of staff, including pilots, flight attendants, ground crews, mechanics, engineers, dispatchers, customer service representatives, and others. They provide guidance, feedback, training, evaluation, motivation, and discipline to their staff.
-
Planning and coordinating flight schedules and operations: An airline manager reviews and approves flight schedules to meet the needs of passengers and cargo shipments. They also monitor flight status, weather conditions, fuel consumption, baggage handling, security checks, boarding procedures, and other aspects of flight operations.
-
Developing and implementing policies and procedures: An airline manager develops and implements policies and procedures for the airline, such as pricing, marketing, customer service, quality assurance, human resources, and environmental sustainability. They also review and update existing policies and procedures to ensure they are effective and efficient.
-
Managing budgets and finances: An airline manager prepares and manages the budget for the airline, including revenue, expenses, profits, and losses. They also oversee the financial transactions, such as payroll, taxes, invoices, contracts, and audits. They also seek ways to reduce costs and increase revenue for the airline.
-
Handling customer service and complaints: An airline manager handles customer service and complaints from passengers, clients, partners, and regulators. They respond to inquiries, requests, feedback, suggestions, and complaints in a timely and professional manner. They also resolve issues and disputes that may arise during or after the flight.
-
-
What are the challenges and opportunities for an airline manager?
-
An airline manager faces many challenges and opportunities in their career, such as:
Dealing with unpredictable situations and emergencies: An airline manager has to deal with various situations and emergencies that may occur during the flight operations, such as bad weather, mechanical failures, security threats, medical emergencies, or natural disasters. They have to act quickly and calmly to ensure the safety of the staff and passengers.
-
Adapting to changing market conditions and customer demands: An airline manager has to adapt to the changing market conditions and customer demands in the aviation industry. They have to monitor the trends and competitors in the market and adjust their strategies accordingly. They also have to meet the expectations and needs of the customers and provide them with quality service and experience.
-
Leveraging technology and innovation to improve efficiency and performance: An airline manager has to leverage technology and innovation to improve the efficiency and performance of the airline. They have to use various tools and systems to manage and optimize the flight operations, such as software, databases, sensors, GPS, artificial intelligence, etc. They also have to explore new opportunities and solutions to enhance the airline's products and services.
-
Collaborating with other stakeholders in the industry: An airline manager has to collaborate with other stakeholders in the industry, such as airports, airlines, regulators, suppliers, contractors, media, etc. They have to establish and maintain good relationships with them and coordinate their activities and interests. They also have to negotiate contracts and agreements with them.
-
Pursuing professional development and career advancement: An airline manager has to pursue professional development and career advancement in their field. They have to keep up with the latest developments and innovations in the aviation industry. They also have to seek opportunities for learning new skills and knowledge. They can also advance their career by taking on higher-level positions or roles in the airline or other organizations.
-
-
Conclusion
-
An airline manager plays a vital role in the aviation industry, one that requires a lot of education, experience, skills, and certifications. An airline manager oversees all aspects of an airline's operations, from the maintenance of the aircraft to the satisfaction of the customers. An airline manager is responsible for ensuring the safety, efficiency, and profitability of the airline, as well as managing a team of staff. It is a challenging but rewarding career that offers many opportunities for growth and development.
-
If you are interested in becoming an airline manager, here are some tips and advice for you:
-
-
Do your research: Learn as much as you can about the aviation industry and the role of an airline manager. Read books, articles, blogs, reports, etc. about aviation management. You can also watch videos or listen to podcasts on this topic.
-
Get educated: Pursue a degree or a diploma in aviation management or a related field. Choose a reputable institution that offers quality education and training in this field. You can also take online courses or MOOCs on aviation management.
-
Gain experience: Seek opportunities to gain experience in the aviation industry. You can apply for internships or training programs at airlines or airports. You can also volunteer or work part-time at these places. You can also join clubs or organizations related to aviation.
-
Build your network: Connect with people who work in the aviation industry or have similar interests as you. You can attend events or seminars on aviation management. You can also join online forums or groups on this topic. You can also reach out to mentors or experts who can guide you in your career path.
-
Be prepared: Prepare yourself for the challenges and opportunities that come with being an airline manager. You have to be flexible, adaptable, resilient, creative, and proactive. You have to be able to handle stress, pressure, and uncertainty. You have to be able to communicate, negotiate, present, and lead effectively. You have to be able to learn from your mistakes and improve your performance.
-
-
FAQs
-
Here are some frequently asked questions about airline managers:
-
-
How much does an airline manager earn?
-
The average salary for an airline manager in the United States is $97,000 per year, according to Indeed.com. However, the salary may vary depending on the location, employer, experience, education, and skills of the airline manager.
-
-
-
What are the benefits of working as an airline manager?
-
Some of the benefits of working as an airline manager are:
You get to work in a dynamic and exciting industry that involves traveling and meeting different people and cultures.
You get to make a positive impact on the lives of millions of passengers and customers who use your airline's services.
You get to challenge yourself and grow professionally and personally by dealing with various situations and opportunities.
You get to enjoy perks and discounts from your employer, such as free or discounted flights, hotels, car rentals, etc.
-
-
-
What are the drawbacks of working as an airline manager?
-
Some of the drawbacks of working as an airline manager are:
You have to work long and irregular hours, including weekends, holidays, and nights.
You have to deal with a lot of stress, pressure, and responsibility that comes with managing an airline's operations.
You have to cope with frequent changes and uncertainties in the aviation industry, such as market fluctuations, customer preferences, technological innovations, etc.
You have to face competition and criticism from other airlines, regulators, media, customers, etc.
-
-
-
What are some of the best airlines to work for as an airline manager?
-
Some of the best airlines to work for as an airline manager are:
Delta Air Lines: Delta is one of the largest and most successful airlines in the world. It has a strong reputation for customer service, innovation, diversity, and social responsibility. It also offers competitive compensation and benefits for its employees.
Singapore Airlines: Singapore Airlines is one of the most awarded and respected airlines in the world. It has a high standard for quality, safety, and performance. It also offers a supportive and collaborative work environment for its employees.
Southwest Airlines: Southwest Airlines is one of the most popular and profitable airlines in the United States. It has a unique culture of fun, teamwork, and customer loyalty. It also offers generous perks and incentives for its employees.
-
-
-
How can I improve my chances of getting hired as an airline manager?
-
Some of the ways you can improve your chances of getting hired as an airline manager are:
Update your resume and cover letter to highlight your education, experience, skills, and achievements related to aviation management.
Prepare for the interview by researching the airline you are applying for, practicing common questions and scenarios, dressing professionally, and being confident and courteous.
Follow up with the employer by sending a thank-you note or email after the interview, expressing your interest and enthusiasm for the position.
Showcase your portfolio or samples of your work related to aviation management, such as reports, presentations, projects, etc.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Anger of Stick 5 Mod Apk Uang Tak Terbatas untuk Membeli Senjata dan Menghancurkan Zombie.md b/spaces/congsaPfin/Manga-OCR/logs/Download Anger of Stick 5 Mod Apk Uang Tak Terbatas untuk Membeli Senjata dan Menghancurkan Zombie.md
deleted file mode 100644
index 2e5cdc8694c20b2daeb908c1eb69175655b1af36..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Anger of Stick 5 Mod Apk Uang Tak Terbatas untuk Membeli Senjata dan Menghancurkan Zombie.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Download Anger of Stick 5 MOD APK Unlimited Money
-
Are you looking for a fun and exciting action game to play on your Android device? Do you want to enjoy unlimited money and gems, unlocked weapons and items, and no ads in your game? If yes, then you should download Anger of Stick 5 MOD APK right now!
-
Anger of Stick 5 is a popular action game with stickman characters. It is the fifth installment in the Anger of Stick series, which has over 100 million downloads worldwide. In this game, you have to fight against enemies who are trying to destroy the city and kidnap your friends. You can use various weapons, skills, and vehicles to defeat them and save the day.
-
Features of the game
-
Anger of Stick 5 has many features that make it an enjoyable and addictive game. Some of them are:
-
-
Simple and easy controls
-
Smooth and realistic animations
-
6 different game modes, including single-player, zombie mode, team battle, survival mode, etc.
-
Over 60 different weapons and items, such as guns, swords, axes, grenades, etc.
-
Over 10 different vehicles, such as helicopters, tanks, robots, etc.
-
Over 200 different enemies with different abilities and behaviors
-
Over 20 different allies who can help you in your missions
-
Customizable character appearance and skills
-
Leaderboards and achievements
-
-
Why download Anger of Stick 5 MOD APK?
-
Benefits of the modded version
-
While Anger of Stick 5 is a free game to download and play, it also has some limitations and drawbacks that can affect your gaming experience. For example, you have to earn money and gems by completing missions or watching ads, which can be time-consuming and annoying. You also have to unlock weapons and items by spending money and gems, which can be expensive and frustrating. And you have to deal with ads that pop up every now and then, which can be distracting and irritating.
-
That's why you should download Anger of Stick 5 MOD APK, which is a modified version of the original game that gives you many benefits and advantages. Some of them are:
-
Unlimited money and gems
-
With Anger of Stick 5 MOD APK, you don't have to worry about earning or spending money and gems anymore. You will have unlimited money and gems in your account from the start, which means you can buy anything you want without any restrictions. You can also upgrade your weapons and items to the maximum level without any hassle.
-
-
Unlocked weapons and items
-
With Anger of Stick 5 MOD APK, you don't have to wait or work hard to unlock weapons and items anymore. You will have access to all the weapons and items in the game from the start, which means you can choose any weapon and item you like without any limitations. You can also try out different combinations of weapons and items to suit your playstyle and preferences.
-
No ads and no root required
-
With Anger of Stick 5 MOD APK, you don't have to deal with ads or rooting anymore. You will not see any ads in the game, which means you can enjoy the game without any interruptions or distractions. You also don't need to root your device to install the mod apk, which means you can avoid any risks or complications that come with rooting.
-
How to download and install the mod apk
-
Downloading and installing Anger of Stick 5 MOD APK is very easy and simple. Just follow these steps:
-
Steps to follow
-
-
Click on the download button below to download the mod apk file.
-
Allow unknown sources in your device settings to install the mod apk.
-
Locate and tap on the downloaded mod apk file to start the installation process.
-
Wait for a few seconds until the installation is complete.
-
Launch the game and enjoy unlimited money and gems, unlocked weapons and items, and no ads.
-
-
Tips and tricks
-
Here are some tips and tricks that can help you play Anger of Stick 5 better:
-
-
Use different weapons and items depending on the situation and the enemy type. For example, use guns for long-range attacks, swords for close-range attacks, grenades for crowd control, etc.
-
Upgrade your weapons and items regularly to increase their damage, durability, and effectiveness.
-
Use your skills wisely to deal more damage, heal yourself, or boost your allies. For example, use the rage skill to increase your attack power, use the heal skill to restore your health, or use the summon skill to call an ally to help you.
-
Play different game modes to challenge yourself and earn more rewards. For example, play zombie mode to fight against hordes of zombies, play team battle mode to cooperate with other players, or play survival mode to test your endurance.
-
Invite your friends to play with you online and have more fun. You can chat with them, share your strategies, and compete with them on the leaderboards.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Anger of Stick 5 is a popular action game with stickman characters that lets you fight against enemies who are trying to destroy the city and kidnap your friends. You can use various weapons, skills, and vehicles to defeat them and save the day. You can also download Anger of Stick 5 MOD APK to enjoy unlimited money and gems, unlocked weapons and items, and no ads in your game. You can also download and install the mod apk easily by following some simple steps. You can also play different game modes, upgrade your weapons and items, use your skills wisely, and invite your friends to play with you online.
-
Call to action
-
If you are ready to download Anger of Stick 5 MOD APK and have a blast with this amazing action game, then don't wait any longer. Click on the download button below and start your adventure now!
-
FAQs
-
Is Anger of Stick 5 MOD APK safe to use?
-
Yes, Anger of Stick 5 MOD APK is safe to use. It is tested by our team of experts and verified by many users. It does not contain any viruses, malware, or spyware that can harm your device or data. It also does not require any permissions that can compromise your privacy or security.
-
What are the minimum requirements for Anger of Stick 5 MOD APK?
-
The minimum requirements for Anger of Stick 5 MOD APK are:
-
-
An Android device running Android 4.1 or higher
-
A stable internet connection
-
At least 100 MB of free storage space
-
A compatible device that can run the game smoothly
-
-
How to update Anger of Stick 5 MOD APK?
-
To update Anger of Stick 5 MOD APK, you have to follow these steps:
-
-
Delete the old version of the mod apk from your device.
-
Download the latest version of the mod apk from our website.
-
Install the new version of the mod apk following the same steps as before.
-
Enjoy the updated features and improvements in the game.
-
-
How to play Anger of Stick 5 with friends online?
-
To play Anger of Stick 5 with friends online, you have to follow these steps:
-
-
Launch the game and tap on the online mode icon on the main menu.
-
Select the game mode you want to play, such as team battle, zombie mode, etc.
-
Invite your friends to join your room by tapping on the invite button and sending them a link or a code.
-
Wait for your friends to accept your invitation and join your room.
-
Start the game and have fun with your friends.
-
-
Where can I find more modded games like Anger of Stick 5?
-
If you are looking for more modded games like Anger of Stick 5, you can visit our website and browse through our collection of modded games. We have modded games for various genres, such as action, adventure, arcade, puzzle, racing, simulation, sports, etc. You can download and install them easily and enjoy unlimited features and benefits in your games.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download CarX Street Lite APK and Race in the Open World of Sunset City.md b/spaces/congsaPfin/Manga-OCR/logs/Download CarX Street Lite APK and Race in the Open World of Sunset City.md
deleted file mode 100644
index 9e09197879d55a7f5b3068140128fc5e0777fd02..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download CarX Street Lite APK and Race in the Open World of Sunset City.md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-
CarX Street Lite APK: A New Way to Experience Street Racing on Android
-
If you are a fan of car racing games, you might have heard of CarX Street Lite APK, a new game that lets you enjoy the thrill of street racing in a dynamic open world. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, why you should play it, some tips and tricks for playing it, how it compares with other car racing games, and some frequently asked questions.
-
What is CarX Street Lite APK?
-
A brief introduction to the game and its features
-
CarX Street Lite APK is an open beta test version of CarX Street, a game developed by CarX Technologies, the makers of CarX Drift Racing 2. It is a realistic street racing game that features high-quality graphics, physics-based car behavior, part tuning, car customization, club battles, boss races, and more. You can choose from a variety of cars, from classic muscle cars to modern supercars, and upgrade them with different parts and accessories. You can also explore a huge open world with different locations, such as highways, city streets, industrial zones, airports, and more. You can race against other players or AI opponents in different modes, such as sprint races, drift races, time trials, drag races, etc.
How to download and install the game on your device
-
Since CarX Street Lite APK is an open beta test version, it is not available on the Google Play Store. However, you can download it from other trusted sources. To install it on your device, you need to follow these steps:
-
-
Download the APK file from a trusted source.
-
Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
-
Locate the downloaded APK file on your device and tap on it.
-
Follow the instructions on the screen to complete the installation.
-
Launch the game and enjoy!
-
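If you prefer to sideload the APK from a computer rather than opening it on the phone, the same install can be done over adb. The snippet below is only a small Python sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and it uses a placeholder name for the downloaded file.

```python
# Sketch: sideload an APK over adb from a computer.
# Assumes adb is on PATH and USB debugging is enabled; the file name is a placeholder.
import subprocess

apk_path = "carx_street_lite.apk"  # placeholder name for the downloaded file

# "-r" tells adb to reinstall (keeping data) if the app is already present.
result = subprocess.run(["adb", "install", "-r", apk_path],
                        capture_output=True, text=True)

print(result.stdout or result.stderr)
```

This is just an alternative route; the on-device steps above work the same way.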
-
Why You Should Play CarX Street Lite APK
-
The benefits of playing an open beta test game
-
By playing CarX Street Lite APK, you are not only having fun but also providing feedback that helps the developers improve the game. As an open beta tester, you can access the game before its official release and enjoy its features for free. You can also report any bugs, glitches, or suggestions to the developers through the game's official website or social media pages. By doing so, you can help them improve the game and make it more enjoyable for everyone.
-
The realistic graphics and physics of the game
-
One of the main attractions of CarX Street Lite APK is its stunning graphics and physics. The game uses the CarX engine, which is known for its realistic car behavior and simulation. The game also features high-quality 3D models, textures, lighting, shadows, reflections, and effects. You can see the details of your car, such as the paint, scratches, dirt, smoke, sparks, etc. You can also see the environment around you, such as the buildings, trees, roads, signs, etc. The game also supports different weather conditions, such as sunny, cloudy, rainy, foggy, etc. The game runs smoothly on most devices and has adjustable graphics settings to suit your preferences.
-
The variety of cars, parts, and customization options
-
Another reason to play CarX Street Lite APK is its wide range of cars, parts, and customization options. The game offers over 50 cars from different categories, such as muscle cars, sports cars, supercars, etc. You can unlock them by completing races, earning money, or buying them with real money. You can also upgrade them with different parts, such as engines, transmissions, suspensions, brakes, tires, etc. You can tune them to suit your driving style and performance needs. You can also customize them with different colors, decals, stickers, rims, spoilers, hoods, bumpers, etc. You can create your own unique car and show it off to other players.
-
The dynamic open world and different race modes
-
The last reason to play CarX Street Lite APK is its dynamic open world and different race modes. The game lets you explore a huge open world with different locations and terrains. You can drive around freely and discover hidden spots and secrets. You can also interact with other players and join clubs or races. The game has different race modes to challenge your skills and compete with others. You can choose from sprint races, drift races, time trials, drag races, etc. You can also join club battles and challenge bosses for rewards and reputation. The game has a ranking system that tracks your progress and achievements.
-
Tips and Tricks for Playing CarX Street Lite APK
-
How to join clubs and challenge bosses
-
One of the features of CarX Street Lite APK is the club system. You can join a club or create your own club with other players. By joining a club, you can chat with other members. When it comes to tuning, you can adjust the values of each parameter by using the sliders or the buttons. You can also see the changes in your car's stats, such as power, torque, weight, acceleration, top speed, etc. You can also test your car's performance by using the test drive option. You can tune your car for different races depending on the track, weather, and mode. For example, you can increase your engine power and top speed for sprint races, lower your suspension and increase your tire grip for drift races, or balance your acceleration and braking for time trials.
-
How to drift and use nitro boost
-
One of the skills that you need to master in CarX Street Lite APK is drifting. Drifting is a technique that involves sliding your car sideways around corners and curves. Drifting can help you maintain your speed and momentum, as well as earn you more points and money. To drift, you need to press and hold the drift button while turning your car. You can also use the handbrake button to initiate a drift. You need to control your steering and throttle to maintain your drift angle and balance. You can also use the nitro boost to increase your speed and power while drifting. Nitro boost is a feature that allows you to temporarily boost your car's performance by using a special fuel. To use nitro boost, you need to press and hold the nitro button. You can see your nitro gauge on the screen, which shows how much nitro you have left. You can refill your nitro gauge by drifting, overtaking, or performing stunts.
-
-
How to earn money and buy houses for your cars
-
The last tip that we have for you is how to earn money and buy houses for your cars. Money is the main currency in CarX Street Lite APK, which you can use to buy new cars, parts, customization items, houses, etc. You can earn money by completing races, challenges, events, etc. You can also earn money by drifting, overtaking, or performing stunts. You can also get money by watching ads or buying it with real money. Houses are another feature in CarX Street Lite APK, which allow you to store and display your cars. You can buy houses with different sizes, styles, and locations. You can also decorate your houses with furniture, paintings, plants, etc. You can access your houses by using the map or the house menu.
-
Comparison of CarX Street Lite APK with Other Car Racing Games
-
A table showing the similarities and differences between CarX Street Lite APK and other popular car racing games on Android
-
-
-
| Game | Similarities | Differences |
| --- | --- | --- |
| CarX Street Lite APK | Realistic graphics and physics; part tuning and car customization; dynamic open world and different race modes; club battles and boss races | Open beta test version; focus on street racing; CarX engine; houses for cars |
| Asphalt 9: Legends | High-quality graphics and effects; variety of cars and customization options; different locations and terrains; multiplayer mode and events | Arcade-style gameplay; focus on stunts and nitro; Gameloft engine; career mode and clubs |
| Real Racing 3 | Realistic graphics and physics; variety of cars and parts; different locations and tracks; multiplayer mode and events | Simulation-style gameplay; focus on real-world racing; EA engine; time-shifted multiplayer mode |
| Need for Speed: No Limits | High-quality graphics and effects; part tuning and car customization; different locations and terrains; multiplayer mode and events | Arcade-style gameplay; focus on underground racing; EA engine; story mode and factions |
| CSR Racing 2 | | |
Google Play Store: https://play.google.com/store/apps/details?id=com.carxtech.streetlite
-
Reddit: https://www.reddit.com/r/CarXStreet/
-
-
I hope you enjoyed this article and learned something new about CarX Street Lite APK. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy racing!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/PES 2023 APK Hack The Ultimate Guide for Android Users.md b/spaces/congsaPfin/Manga-OCR/logs/PES 2023 APK Hack The Ultimate Guide for Android Users.md
deleted file mode 100644
index ea8dcafb8914e9df04430c717b23d4b7d7b30fbf..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/PES 2023 APK Hack The Ultimate Guide for Android Users.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
How to Hack PES 2023 APK and Enjoy Unlimited Coins and GP
-
If you are a fan of soccer games, you must have heard of PES 2023 APK, the latest installment of the popular eFootball series from Konami. This game offers you a realistic and immersive experience of playing soccer on your mobile device, with stunning graphics, smooth gameplay, and various modes to choose from. However, if you want to enjoy the game to the fullest, you might need to hack PES 2023 APK and get unlimited coins and GP, which are the in-game currencies that allow you to buy players, kits, stadiums, and more. In this article, we will show you what PES 2023 APK is, why you need to hack it, and how to hack it safely and easily.
PES 2023 APK is the Android version of eFootball PES 2023, the latest edition of the Pro Evolution Soccer series developed by Konami. This game is one of the most popular and realistic soccer games on the market, competing with FIFA Mobile from EA Sports. PES 2023 APK features licensed teams, players, leagues, and tournaments from around the world, as well as original content created by Konami. You can play solo or with friends online, in various modes such as Matchday, Master League, MyClub, eFootball League, and more.
-
Features of PES 2023 APK
-
PES 2023 APK has many features that make it stand out from other soccer games. Here are some of them:
-
Graphics and Sound
-
PES 2023 APK uses Unreal Engine 5, which is a powerful game engine that delivers stunning graphics and animations. The game also uses motion capture technology to capture the movements and expressions of real players, making them look more lifelike and realistic. The game also has high-quality sound effects and commentary, creating an immersive atmosphere for the players.
-
Gameplay and Modes
-
PES 2023 APK has smooth and responsive gameplay, with intuitive controls and realistic physics. The game also has various modes to suit different preferences and skill levels. You can play quick matches, tournaments, leagues, or cups in Matchday mode; create your own team and manage it in Master League mode; build your dream squad and compete with others in MyClub mode; join a club and participate in online competitions in eFootball League mode; or create your own custom matches in Local mode.
-
-
Online and Offline Options
-
PES 2023 APK allows you to play online or offline, depending on your internet connection and mood. You can play online with other players from around the world, using either Wi-Fi or mobile data. You can also play offline against AI opponents or with friends using Bluetooth or local Wi-Fi. The game also supports cross-platform play, meaning you can play with players who use different devices such as PC or console.
-
Why You Need to Hack PES 2023 APK
-
As much as PES 2023 APK is fun and enjoyable, it also has some limitations that might hinder your progress and satisfaction. One of these limitations is the lack of coins and GP, which are the in-game currencies that you need to buy players, kits, stadiums, and other items. Coins and GP are earned by playing matches, completing tasks, or watching ads, but they are not enough to get the best players and items. You can also buy coins and GP with real money, but that can be expensive and risky. That's why you might want to hack PES 2023 APK and get unlimited coins and GP for free.
-
Benefits of Hacking PES 2023 APK
-
Hacking PES 2023 APK can give you many benefits that can enhance your gaming experience. Here are some of them:
-
Unlock All Players and Teams
-
By hacking PES 2023 APK, you can unlock all the players and teams in the game, including the legends, the icons, and the special editions. You can also get the latest transfers and updates for the players and teams. This way, you can have your favorite players and teams in your squad, and play with them in any mode you want.
-
Customize Your Squad and Tactics
-
By hacking PES 2023 APK, you can customize your squad and tactics according to your preference and strategy. You can change the formation, the positions, the roles, the skills, and the attributes of your players. You can also equip them with the best kits, boots, balls, and accessories. You can also adjust the difficulty level, the match length, the weather conditions, and the stadium settings. This way, you can create your own unique squad and tactics that suit your style and goals.
-
Boost Your Performance and Skills
-
By hacking PES 2023 APK, you can boost your performance and skills in the game. You can increase your coins and GP balance, which allows you to buy more items and upgrades. You can also increase your energy level, which allows you to play more matches without getting tired. You can also increase your stats, such as speed, stamina, strength, accuracy, dribbling, passing, shooting, defending, and more. This way, you can improve your skills and abilities in the game, and dominate your opponents.
-
Risks of Hacking PES 2023 APK
-
However, hacking PES 2023 APK also has some risks that you need to be aware of. Here are some of them:
-
Ban from Online Servers
-
By hacking PES 2023 APK, you might get banned from the online servers of the game. This means that you will not be able to play online with other players or join online competitions. You will also lose all your progress and achievements in the game. Konami has a strict policy against hacking and cheating in their games, and they monitor the online activities of the players. If they detect any suspicious or abnormal behavior from your account, they will ban you without warning or mercy.
-
Malware and Viruses
-
By hacking PES 2023 APK, you might get malware and viruses on your device. This is because most of the hacking tools or modded versions of PES 2023 APK are not safe or reliable. They might contain malicious codes or programs that can harm your device or steal your personal information. They might also cause errors or crashes on your device or game. Therefore, you need to be careful when downloading or installing any hacking tool or modded version of PES 2023 APK.
-
Legal Issues and Consequences
-
By hacking PES 2023 APK, you might face legal issues and consequences. This is because hacking PES 2023 APK is illegal and unethical. It violates the terms of service and the intellectual property rights of Konami. It also ruins the fair play and the fun of the game for other players. Therefore, you might face legal actions or lawsuits from Konami or other parties if they find out that you hacked PES 2023 APK.
-
How to Hack PES 2023 APK Safely and Easily
-
If you still want to hack PES 2023 APK despite the risks involved, you need to do it safely and easily. Here are some steps that you can follow:
-
Steps to Hack PES 2023 APK
-
Download a Modded Version of PES 2023 APK
-
The first step is to download a modded version of PES 2023 APK that has unlimited coins and GP features. You can find many websites or sources that offer such modded versions of PES 2023 APK online. However, you need to be careful when choosing one, as some of them might be fake or harmful. You need to check the reviews, ratings, and feedback of other users who have downloaded the modded version of PES 2023 APK. You also need to check the security and compatibility of the modded version of PES 2023 APK with your device and game version. You can use reliable antivirus or anti-malware software to scan the modded version of PES 2023 APK before downloading or installing it.
-
Install the Modded APK on Your Device
-
The second step is to install the modded version of PES 2023 APK on your device. You need to uninstall the original version of PES 2023 APK first, if you have it on your device. Then, you need to enable the unknown sources option on your device settings, which allows you to install apps from sources other than the Google Play Store. After that, you need to locate the modded version of PES 2023 APK file on your device storage, and tap on it to install it. You might need to grant some permissions or accept some terms and conditions during the installation process.
-
Launch the Game and Enjoy the Hack
-
The third and final step is to launch the game and enjoy the hack. You will see that you have unlimited coins and GP in your account, which you can use to buy anything you want in the game. You can also access all the players and teams in the game, and customize your squad and tactics as you wish. You can also play any mode you want, online or offline, without any restrictions or limitations. However, you need to be careful when playing online, as you might get detected or reported by other players or Konami. You also need to update the modded version of PES 2023 APK regularly, as new versions of the game might come out with new features or bug fixes.
-
Conclusion
-
PES 2023 APK is a great soccer game that offers you a realistic and immersive experience of playing soccer on your mobile device. However, if you want to enjoy the game to the fullest, you might need to hack PES 2023 APK and get unlimited coins and GP, which are the in-game currencies that allow you to buy players, kits, stadiums, and more. In this article, we showed you what PES 2023 APK is, why you need to hack it, and how to hack it safely and easily. We hope that this article was helpful and informative for you. However, we do not encourage or endorse hacking or cheating in any game, as it is illegal and unethical. It also ruins the fair play and the fun of the game for other players. Therefore, we advise you to play PES 2023 APK without hacking it, and enjoy it as it is meant to be enjoyed.
-
FAQs
-
Here are some frequently asked questions about hacking PES 2023 APK:
-
Q: Is hacking PES 2023 APK legal?
-
A: No, hacking PES 2023 APK is not legal. It violates the terms of service and the intellectual property rights of Konami. It also violates the laws and regulations of your country or region. Therefore, hacking PES 2023 APK can lead to legal issues and consequences.
-
Q: Is hacking PES 2023 APK safe?
-
A: No, hacking PES 2023 APK is not safe. It can expose your device or account to malware or viruses, which can harm your device or steal your personal information. It can also cause errors or crashes on your device or game. It can also get you banned from the online servers of the game, which means that you will lose all your progress and achievements in the game.
-
Q: Is hacking PES 2023 APK easy?
-
A: Yes, hacking PES 2023 APK is easy if you follow the steps that we provided in this article. However, you need to be careful when choosing a modded version of PES 2023 APK, as some of them might be fake or harmful. You also need to check the security and compatibility of the modded version of PES 2023 APK with your device and game version.
-
Q: Is hacking PES 2023 APK worth it?
-
A: No, hacking PES 2023 APK is not worth it. It might give you some benefits such as unlimited coins and GP, but it also has many risks such as ban from online servers, malware and viruses, legal issues and consequences. It also ruins the fair play and the fun of the game for other players. Therefore, hacking PES 2023 APK is not worth it, and we advise you to play the game without hacking it.
-
Q: How can I play PES 2023 APK without hacking it?
-
A: You can play PES 2023 APK without hacking it by following these tips:
-
-
Play regularly and complete tasks to earn coins and GP.
-
Watch ads or participate in surveys to get free coins and GP.
-
Use the scout feature to find and sign players that suit your budget and needs.
-
Use the training feature to improve the skills and abilities of your players.
-
Use the strategy feature to adjust your formation, positions, roles, and tactics according to your opponents and situations.
-
Use the tips and tricks feature to learn more about the game and its features.
-
Join a club or a community to get support and advice from other players.
-
-
By following these tips, you can play PES 2023 APK without hacking it, and enjoy it as it is meant to be enjoyed.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Revit Sandwich Panels Tips and Tricks for Using Them in Your Project.md b/spaces/congsaPfin/Manga-OCR/logs/Revit Sandwich Panels Tips and Tricks for Using Them in Your Project.md
deleted file mode 100644
index f663a2dc85b1a7e36fa8563677c4e8618d47c0db..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Revit Sandwich Panels Tips and Tricks for Using Them in Your Project.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Download Sandwich Panel Revit: A Guide for Architects and Engineers
-
If you are looking for a way to create high-performance, energy-efficient, and aesthetically pleasing buildings, you might want to consider using sandwich panels. Sandwich panels are composite structures that consist of three layers: a low-density core and a thin skin-layer bonded to each side. They are widely used in various applications, such as roofs, walls, cold storage, acoustic insulation, and more.
-
In this article, we will explain what sandwich panels are, what types and advantages they have, and how you can download sandwich panel Revit models for your projects. We will also show you how to use sandwich panel Revit models in your design and documentation process.
Sandwich panels are a type of building material that combines multiple layers of different materials to create a unique composite structure. The core layer provides thermal insulation, acoustic insulation, fire resistance, and mechanical strength, while the skin layers provide protection, durability, and aesthetics. Sandwich panels can be made of various materials, such as polyurethane (PUR), polystyrene (EPS), mineral wool, aluminum, steel, fiberglass, wood, etc.
-
Sandwich panel definition
-
According to Wikipedia, a sandwich panel is "any structure made of three layers: a low-density core (PIR, mineral wool, XPS), and a thin skin-layer bonded to each side. Sandwich panels are used in applications where a combination of high structural rigidity and low weight is required." According to Merriam-Webster, a sandwich panel is "a structural panel material fabricated by bonding several laminations."
-
Sandwich panel types
-
There are many types of sandwich panels available on the market, depending on the core material, the skin material, the profile type, the coating type, and the color. Some of the common types of sandwich panels are:
-
-
PUR/PIR sandwich panels: These panels have a core made of polyurethane or polyisocyanurate foam, which offers excellent thermal insulation and fire resistance. They are suitable for roofs and walls in various buildings.
-
EPS sandwich panels: These panels have a core made of expanded polystyrene foam, which is lightweight and economical. They are mainly used for cold storage and refrigeration facilities.
-
Mineral wool sandwich panels: These panels have a core made of mineral wool fibers, which provide good acoustic insulation and fire resistance. They are ideal for industrial buildings that require noise reduction and fire safety.
-
ACP sandwich panels: These panels are made of aluminum composite material (ACM), which consists of two thin aluminum sheets bonded to a plastic core. They are used for architectural cladding and signage applications.
-
-
Sandwich panel advantages
-
Sandwich panels offer many advantages over traditional building materials, such as:
-
-
Lightweight: Sandwich panels are lightweight, which makes them easy to handle, transport, and install. They also reduce the load on the foundation and the structure of the building.
-
Thermal insulation: Sandwich panels have high thermal resistance values, which help reduce heat loss and energy consumption in buildings. They also prevent condensation and moisture problems.
-
Acoustic insulation: Sandwich panels have good sound absorption properties, which help reduce noise transmission and improve the acoustic comfort in buildings.
-
Mechanical strength: Sandwich panels have high bending stiffness and load-bearing capacity, which enable them to span large distances without additional support.
-
Durability: Sandwich panels are resistant to corrosion, weathering, UV rays, and impact. They also have a long service life and low maintenance costs.
-
Aesthetics: Sandwich panels come in various colors, textures, and finishes, which allow for a wide range of design possibilities and architectural expressions.
-
-
How to download sandwich panel Revit models?
-
If you want to use sandwich panels in your Revit projects, you will need to download sandwich panel Revit models from reliable sources. There are several platforms and software that offer free or paid access to sandwich panel Revit models, such as:
-
BIMobject: a platform for free Revit walls - sandwich panels
-
BIMobject is one of the largest and most popular platforms for downloading BIM objects and Revit families. It has a wide collection of sandwich panel Revit models from various manufacturers and brands, such as Kingspan, Ruukki, Metecno, etc. You can browse, filter, and download sandwich panel Revit models for free from BIMobject's website or app. You can also view the technical specifications, ratings, and reviews of each sandwich panel Revit model before downloading it.
AGACAD: a software for sandwich panel design and installation
-
AGACAD is a software company that develops tools and solutions for BIM and Revit. One of its products is Smart Assemblies, which is a software for creating and managing complex assemblies in Revit. Smart Assemblies can be used to design and install sandwich panels in Revit, as well as generate shop drawings, schedules, and reports. You can download a free trial of Smart Assemblies from AGACAD's website or purchase a license for the full version.
-
Other sources of sandwich panel Revit models
-
There are also other sources of sandwich panel Revit models that you can explore, such as:
-
-
RevitCity: a community website for Revit users that offers free downloads of Revit families and objects, including sandwich panels.
-
CADdetails: a platform for connecting manufacturers and designers that provides free downloads of CAD drawings and BIM models, including sandwich panels.
-
Manufacturer websites: some sandwich panel manufacturers provide their own Revit models on their websites or catalogs, which you can download directly or request by email.
-
-
How to use sandwich panel Revit models in your projects?
-
Once you have downloaded the sandwich panel Revit models that you need, you can use them in your projects by following these steps:
-
Importing and placing sandwich panels in Revit
-
To import and place sandwich panels in Revit, you need to do the following:
-
-
Open your Revit project and go to the Insert tab.
-
Click on Load Family and browse to the folder where you saved the sandwich panel Revit model.
-
Select the sandwich panel Revit model and click on Open.
-
The sandwich panel will appear in the Project Browser under Families - Walls - Sandwich Panels.
-
Select the sandwich panel from the Project Browser and drag it to the drawing area.
-
Use the Modify tools to adjust the position, orientation, height, width, and alignment of the sandwich panel.
-
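If you need to load many panel families at once, the same steps can be scripted against the Revit API. The sketch below is a minimal pyRevit-style example, assuming it runs inside Revit's Python environment and using a hypothetical path to a downloaded family file; the UI steps above remain the normal route.

```python
# Minimal pyRevit-style sketch: load a sandwich panel family into the open project.
# Assumes this runs inside Revit (pyRevit); the family path is hypothetical.
from Autodesk.Revit.DB import Transaction

doc = __revit__.ActiveUIDocument.Document  # the currently open Revit project

family_path = r"C:\Families\SandwichPanel.rfa"  # hypothetical downloaded family file

t = Transaction(doc, "Load sandwich panel family")
t.Start()
loaded = doc.LoadFamily(family_path)  # True if the family was loaded
t.Commit()

print("Family loaded" if loaded else "Family already present or load failed")
```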
-
Modifying and customizing sandwich panels in Revit
-
To modify and customize sandwich panels in Revit, you need to do the following:
-
-
Select the sandwich panel that you want to modify and go to the Properties palette.
-
Under Type Properties, you can change the parameters of the sandwich panel, such as core material, skin material, profile type, coating type, color, etc.
-
You can also create new types of sandwich panels by clicking on Duplicate and renaming them.
-
If you want to edit the geometry of the sandwich panel, you can click on Edit Family to open the Family Editor.
-
In the Family Editor, you can use the Sketch tools to modify the shape and size of the sandwich panel.
-
You can also use the Reference Planes, Dimensions, Parameters, Constraints, Formulas, and Families to create more complex and parametric sandwich panels.
-
When you are done, click on Load into Project to save the changes and update the sandwich panel in your project.
-
-
Generating documentation and reports from sandwich panel Revit models
-
To generate documentation and reports from sandwich panel Revit models, you need to do the following:
-
-
Create views of your project that show the sandwich panels in different perspectives, such as floor plans, elevations, sections, 3D views, etc.
-
Add annotations, dimensions, tags, symbols, and notes to your views to provide more information and clarity about the sandwich panels.
-
Create sheets and place your views on them. You can also add title blocks, legends, schedules, and other elements to your sheets.
-
Create schedules of your sandwich panels that display their properties, quantities, costs, and other data. You can also use filters, sorting, grouping, and formulas to organize and customize your schedules.
-
Export your sheets and schedules as PDF, DWG, DWF, or other formats to share them with your clients, contractors, or stakeholders.
-
-
Conclusion
-
Sandwich panels are a versatile and efficient building material that can enhance the performance and appearance of your buildings. They offer many benefits, such as thermal insulation, acoustic insulation, mechanical strength, durability, and aesthetics. You can download sandwich panel Revit models from various sources, such as BIMobject, AGACAD, RevitCity, CADdetails, or manufacturer websites. You can also use sandwich panel Revit models in your projects by importing, placing, modifying, customizing, and documenting them in Revit.
-
We hope this article has helped you understand how to download sandwich panel Revit models and use them in your projects. If you have any questions or feedback, please feel free to contact us or leave a comment below.
-
FAQs
-
Here are some frequently asked questions about sandwich panel Revit models:
-
-
What is the difference between a sandwich panel and a curtain wall?
-
A sandwich panel is a composite structure that consists of three layers: a low-density core and a thin skin-layer bonded to each side. A curtain wall is a non-structural system that covers the exterior of a building and consists of glass panels or other materials supported by a metal frame.
-
How do I change the thickness of a sandwich panel in Revit?
-
To change the thickness of a sandwich panel in Revit, you need to select the sandwich panel and go to the Properties palette. Under Type Properties, you can change the value of the Thickness parameter. Alternatively, you can edit the family of the sandwich panel and modify the thickness of each layer in the Family Editor.
-
How do I create a custom sandwich panel in Revit?
-
To create a custom sandwich panel in Revit, you need to use the Family Editor. You can either start from scratch or duplicate an existing sandwich panel family. You can use the Sketch tools to draw the profile of the sandwich panel and assign materials to each layer. You can also use Reference Planes, Dimensions, Parameters, Constraints, Formulas, and Families to make your sandwich panel more complex and parametric.
-
How do I calculate the U-value of a sandwich panel in Revit?
-
To calculate the U-value of a sandwich panel in Revit, you need to use the Analyze tab. Under Energy Analysis - Building Elements - Thermal Properties - Analytical Construction Library - Walls - Sandwich Panels - Edit/Create New Construction - Edit Thermal Properties - Calculate U-Factor from Layers. You can then enter the properties of each layer of the sandwich panel and click on Calculate U-Factor.
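If you want to sanity-check the number Revit reports, the underlying formula is simply the reciprocal of the total thermal resistance. The short Python sketch below uses illustrative layer values for a steel-skinned PIR panel and typical wall surface resistances; they are assumptions for the example, not values taken from any particular product.

```python
# U-value check for a sandwich panel: U = 1 / (R_si + sum(d / lambda) + R_se)
# Layer values below are illustrative, not from a specific product.

layers = [
    # (name, thickness in m, thermal conductivity in W/(m*K))
    ("steel skin", 0.0005, 50.0),
    ("PIR core",   0.1000, 0.022),
    ("steel skin", 0.0005, 50.0),
]

R_SI = 0.13  # typical internal surface resistance for a wall, m2*K/W
R_SE = 0.04  # typical external surface resistance, m2*K/W

r_total = R_SI + R_SE + sum(d / lam for _, d, lam in layers)
u_value = 1.0 / r_total

for name, d, lam in layers:
    print(f"{name}: R = {d / lam:.3f} m2*K/W")
print(f"Total R = {r_total:.2f} m2*K/W -> U = {u_value:.3f} W/(m2*K)")
```

In this example the 100 mm PIR core dominates the result, which is why the core material and thickness matter far more than the skins for thermal performance.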
-
How do I install a sandwich panel in Revit?
-
To install a sandwich panel in Revit, you need to use the Modify tools. You can use the Align tool to align the edges of the sandwich panel with the reference planes or elements. You can use the Move tool to move the sandwich panel to the desired location. You can use the Rotate tool to rotate the sandwich panel around an axis. You can use the Mirror tool to create a symmetrical copy of the sandwich panel. You can use the Array tool to create multiple copies of the sandwich panel along a path or direction.
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/deform_conv.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/deform_conv.py
deleted file mode 100644
index a3f8c75ee774823eea334e3b3732af6a18f55038..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/deform_conv.py
+++ /dev/null
@@ -1,405 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair, _single
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..cnn import CONV_LAYERS
-from ..utils import ext_loader, print_log
-
-ext_module = ext_loader.load_ext('_ext', [
- 'deform_conv_forward', 'deform_conv_backward_input',
- 'deform_conv_backward_parameters'
-])
-
-
-class DeformConv2dFunction(Function):
-
- @staticmethod
- def symbolic(g,
- input,
- offset,
- weight,
- stride,
- padding,
- dilation,
- groups,
- deform_groups,
- bias=False,
- im2col_step=32):
- return g.op(
- 'mmcv::MMCVDeformConv2d',
- input,
- offset,
- weight,
- stride_i=stride,
- padding_i=padding,
- dilation_i=dilation,
- groups_i=groups,
- deform_groups_i=deform_groups,
- bias_i=bias,
- im2col_step_i=im2col_step)
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- weight,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deform_groups=1,
- bias=False,
- im2col_step=32):
- if input is not None and input.dim() != 4:
- raise ValueError(
- f'Expected 4D tensor as input, got {input.dim()}D tensor \
- instead.')
- assert bias is False, 'Only support bias is False.'
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deform_groups = deform_groups
- ctx.im2col_step = im2col_step
-
- # When pytorch version >= 1.6.0, amp is adopted for fp16 mode;
- # amp won't cast the type of model (float32), but "offset" is cast
- # to float16 by nn.Conv2d automatically, leading to the type
- # mismatch with input (when it is float32) or weight.
- # The flag for whether to use fp16 or amp is the type of "offset",
- # we cast weight and input to temporarily support fp16 and amp
- # whatever the pytorch version is.
- input = input.type_as(offset)
- weight = weight.type_as(input)
- ctx.save_for_backward(input, offset, weight)
-
- output = input.new_empty(
- DeformConv2dFunction._output_size(ctx, input, weight))
-
- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
-
- cur_im2col_step = min(ctx.im2col_step, input.size(0))
- assert (input.size(0) %
- cur_im2col_step) == 0, 'im2col step must divide batchsize'
- ext_module.deform_conv_forward(
- input,
- weight,
- offset,
- output,
- ctx.bufs_[0],
- ctx.bufs_[1],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- im2col_step=cur_im2col_step)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, weight = ctx.saved_tensors
-
- grad_input = grad_offset = grad_weight = None
-
- cur_im2col_step = min(ctx.im2col_step, input.size(0))
- assert (input.size(0) % cur_im2col_step
- ) == 0, 'batch size must be divisible by im2col_step'
-
- grad_output = grad_output.contiguous()
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- ext_module.deform_conv_backward_input(
- input,
- offset,
- grad_output,
- grad_input,
- grad_offset,
- weight,
- ctx.bufs_[0],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- im2col_step=cur_im2col_step)
-
- if ctx.needs_input_grad[2]:
- grad_weight = torch.zeros_like(weight)
- ext_module.deform_conv_backward_parameters(
- input,
- offset,
- grad_output,
- grad_weight,
- ctx.bufs_[0],
- ctx.bufs_[1],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- scale=1,
- im2col_step=cur_im2col_step)
-
- return grad_input, grad_offset, grad_weight, \
- None, None, None, None, None, None, None
-
- @staticmethod
- def _output_size(ctx, input, weight):
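-        # Compute the output tensor shape (batch, out channels, then one spatial size
-        # per dimension) implied by the padding, dilation, kernel size and stride.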
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = ctx.padding[d]
- kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = ctx.stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError(
- 'convolution input is too small (output would be ' +
- 'x'.join(map(str, output_size)) + ')')
- return output_size
-
-
-deform_conv2d = DeformConv2dFunction.apply
-
-
-class DeformConv2d(nn.Module):
- r"""Deformable 2D convolution.
-
- Applies a deformable 2D convolution over an input signal composed of
- several input planes. DeformConv2d was described in the paper
- `Deformable Convolutional Networks
- `_
-
- Note:
-        The argument ``im2col_step`` was added in version 1.3.17; it specifies the
-        number of samples processed by the ``im2col_cuda_kernel`` per call.
- It enables users to define ``batch_size`` and ``im2col_step`` more
- flexibly and solved `issue mmcv#1440
- `_.
-
- Args:
- in_channels (int): Number of channels in the input image.
- out_channels (int): Number of channels produced by the convolution.
- kernel_size(int, tuple): Size of the convolving kernel.
- stride(int, tuple): Stride of the convolution. Default: 1.
- padding (int or tuple): Zero-padding added to both sides of the input.
- Default: 0.
- dilation (int or tuple): Spacing between kernel elements. Default: 1.
-        groups (int): Number of blocked connections from input
-            channels to output channels. Default: 1.
- deform_groups (int): Number of deformable group partitions.
- bias (bool): If True, adds a learnable bias to the output.
- Default: False.
- im2col_step (int): Number of samples processed by im2col_cuda_kernel
- per call. It will work when ``batch_size`` > ``im2col_step``, but
- ``batch_size`` must be divisible by ``im2col_step``. Default: 32.
- `New in version 1.3.17.`
- """
-
- @deprecated_api_warning({'deformable_groups': 'deform_groups'},
- cls_name='DeformConv2d')
- def __init__(self,
- in_channels: int,
- out_channels: int,
- kernel_size: Union[int, Tuple[int, ...]],
- stride: Union[int, Tuple[int, ...]] = 1,
- padding: Union[int, Tuple[int, ...]] = 0,
- dilation: Union[int, Tuple[int, ...]] = 1,
- groups: int = 1,
- deform_groups: int = 1,
- bias: bool = False,
- im2col_step: int = 32) -> None:
- super(DeformConv2d, self).__init__()
-
- assert not bias, \
- f'bias={bias} is not supported in DeformConv2d.'
-        assert in_channels % groups == 0, \
-            f'in_channels {in_channels} is not divisible by groups {groups}'
-        assert out_channels % groups == 0, \
-            f'out_channels {out_channels} is not divisible by groups \
-            {groups}'
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deform_groups = deform_groups
- self.im2col_step = im2col_step
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- # only weight, no bias
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels // self.groups,
- *self.kernel_size))
-
- self.reset_parameters()
-
- def reset_parameters(self):
- # switch the initialization of `self.weight` to the standard kaiming
- # method described in `Delving deep into rectifiers: Surpassing
- # human-level performance on ImageNet classification` - He, K. et al.
- # (2015), using a uniform distribution
- nn.init.kaiming_uniform_(self.weight, nonlinearity='relu')
-
- def forward(self, x: Tensor, offset: Tensor) -> Tensor:
- """Deformable Convolutional forward function.
-
- Args:
- x (Tensor): Input feature, shape (B, C_in, H_in, W_in)
- offset (Tensor): Offset for deformable convolution, shape
- (B, deform_groups*kernel_size[0]*kernel_size[1]*2,
- H_out, W_out), H_out, W_out are equal to the output's.
-
- An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
- The spatial arrangement is like:
-
- .. code:: text
-
- (x0, y0) (x1, y1) (x2, y2)
- (x3, y3) (x4, y4) (x5, y5)
- (x6, y6) (x7, y7) (x8, y8)
-
- Returns:
- Tensor: Output of the layer.
- """
- # To fix an assert error in deform_conv_cuda.cpp:128
- # input image is smaller than kernel
- input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) <
- self.kernel_size[1])
- if input_pad:
- pad_h = max(self.kernel_size[0] - x.size(2), 0)
- pad_w = max(self.kernel_size[1] - x.size(3), 0)
- x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0)
- offset = offset.contiguous()
- out = deform_conv2d(x, offset, self.weight, self.stride, self.padding,
- self.dilation, self.groups, self.deform_groups,
- False, self.im2col_step)
- if input_pad:
- out = out[:, :, :out.size(2) - pad_h, :out.size(3) -
- pad_w].contiguous()
- return out
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(in_channels={self.in_channels},\n'
- s += f'out_channels={self.out_channels},\n'
- s += f'kernel_size={self.kernel_size},\n'
- s += f'stride={self.stride},\n'
- s += f'padding={self.padding},\n'
- s += f'dilation={self.dilation},\n'
- s += f'groups={self.groups},\n'
- s += f'deform_groups={self.deform_groups},\n'
- # bias is not supported in DeformConv2d.
- s += 'bias=False)'
- return s
-
-
-@CONV_LAYERS.register_module('DCN')
-class DeformConv2dPack(DeformConv2d):
- """A Deformable Conv Encapsulation that acts as normal Conv layers.
-
- The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
- The spatial arrangement is like:
-
- .. code:: text
-
- (x0, y0) (x1, y1) (x2, y2)
- (x3, y3) (x4, y4) (x5, y5)
- (x6, y6) (x7, y7) (x8, y8)
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(DeformConv2dPack, self).__init__(*args, **kwargs)
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_offset()
-
- def init_offset(self):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- offset = self.conv_offset(x)
- return deform_conv2d(x, offset, self.weight, self.stride, self.padding,
- self.dilation, self.groups, self.deform_groups,
- False, self.im2col_step)
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- version = local_metadata.get('version', None)
-
- if version is None or version < 2:
- # the key is different in early versions
- # In version < 2, DeformConvPack loads previous benchmark models.
- if (prefix + 'conv_offset.weight' not in state_dict
- and prefix[:-1] + '_offset.weight' in state_dict):
- state_dict[prefix + 'conv_offset.weight'] = state_dict.pop(
- prefix[:-1] + '_offset.weight')
- if (prefix + 'conv_offset.bias' not in state_dict
- and prefix[:-1] + '_offset.bias' in state_dict):
- state_dict[prefix +
- 'conv_offset.bias'] = state_dict.pop(prefix[:-1] +
- '_offset.bias')
-
- if version is not None and version > 1:
- print_log(
- f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to '
- 'version 2.',
- logger='root')
-
- super()._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys, unexpected_keys,
- error_msgs)
diff --git a/spaces/coreml-projects/transformers-to-coreml/Dockerfile b/spaces/coreml-projects/transformers-to-coreml/Dockerfile
deleted file mode 100644
index d6c893f48cc4fe6625ab90b7ecf3f5f6a455d58f..0000000000000000000000000000000000000000
--- a/spaces/coreml-projects/transformers-to-coreml/Dockerfile
+++ /dev/null
@@ -1,30 +0,0 @@
-FROM python:3.9.16
-ENV DEBIAN_FRONTEND=noninteractive \
- TZ=Europe/Paris
-
-# BEGIN root part
-
-# Setup tailscale
-WORKDIR /bin
-ENV TSFILE=tailscale_1.38.2_amd64.tgz
-RUN wget https://pkgs.tailscale.com/stable/${TSFILE} && \
- tar xzf ${TSFILE} --strip-components=1
-RUN mkdir -p /var/run && ln -s /tmp/tailscale /var/run/tailscale && \
- mkdir -p /var/cache && ln -s /tmp/tailscale /var/cache/tailscale && \
- mkdir -p /var/lib && ln -s /tmp/tailscale /var/lib/tailscale && \
- mkdir -p /var/task && ln -s /tmp/tailscale /var/task/tailscale
-
-# Install socat
-RUN apt-get update && apt-get -y install socat
-
-# User
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-WORKDIR /home/user/app
-
-COPY --link --chown=1000 ./ $HOME/app
-
-ENTRYPOINT $HOME/app/startup.sh
-
diff --git a/spaces/cybercorejapan/human-detection-docker/models/engine/visualizer.py b/spaces/cybercorejapan/human-detection-docker/models/engine/visualizer.py
deleted file mode 100644
index cdb6af3e2f76d6f239638662acae9d3b461d7959..0000000000000000000000000000000000000000
--- a/spaces/cybercorejapan/human-detection-docker/models/engine/visualizer.py
+++ /dev/null
@@ -1,143 +0,0 @@
-from abc import abstractmethod
-from typing import List, Optional
-import os
-import cv2
-import subprocess
-import numpy as np
-
-def putText(img, text: str, position,
- text_font: int=0, text_scale: int=1,
- bg_color=(255,255,255),
- text_color=(255,0,255),
- bg_thickness=8,
- text_thickness=1,
- lineType=cv2.LINE_AA):
- """ Function to put text on image.
-
- Args:
- img (_type_):
- text (str): _description_
- position (_type_): Top-left position of text.
- text_font (int, optional): font size of text. Defaults to 0.
- text_scale (int, optional): text scale. Defaults to 1.
- bg_color (tuple, optional): text background color. Defaults to (255,255,255).
- text_color (tuple, optional): text foreground color. Defaults to (255,0,255).
- bg_thickness (int, optional): text background thickness. Defaults to 8.
- text_thickness (int, optional): text foreground thickness. Defaults to 1.
- lineType (_type_, optional): line type. Defaults to cv2.LINE_AA.
-
- Returns:
- _type_: _description_
- """
- img = cv2.putText(img, text, position, text_font, text_scale, bg_color, thickness=bg_thickness, lineType=lineType)
- img = cv2.putText(img, text, position, text_font, text_scale, text_color, thickness=text_thickness, lineType=lineType)
- return img
-
-
-class BaseVisualizer():
-
- def __init__(self, class_names: Optional[List[str]], fps: int=-1, min_width: int=-1):
- """ Visualizer class for visualization (track_results + count_results).
-
- Args:
-            class_names (Optional[List[str]]): list of class names used for visualization; may be None.
-            fps (int): FPS for the output video. If fps = -1, the output keeps the same FPS as the input video.
-            min_width (int): minimum width for the output video (the height is scaled to keep the input aspect ratio). If min_width = -1, the output keeps the input resolution.
- """
-
- self.fps = fps
- self.min_width = min_width
- self.class_names = class_names
-
- def init_writer(self, input_video_info: List[int], output_path: str):
- """ Init video writer for write visualized frame to output video.
-
- Args:
- input_video_info (List[int]): It is a list that includes 4 elements of input video information (fps, width, height, num_frames).
- output_path (str): Path to save output video.
- """
- if (self.fps == -1):
- self.fps = input_video_info[0]
- self.width, self.height = input_video_info[1], input_video_info[2]
- if (self.min_width > 0):
- out_width = min(self.min_width, self.width)
- self.height = (self.height * out_width)//self.width
- self.width = out_width
- self.output_path = output_path
- self.writer = cv2.VideoWriter(self.output_path, cv2.VideoWriter_fourcc(*"mp4v"), int(self.fps), (self.width, self.height))
-
- @staticmethod
- def get_color(idx):
- idx = idx * 3
- color = ((37 * idx) % 255, (17 * idx) % 255, (29 * idx) % 255)
-
- return color
-
- @staticmethod
- def draw_dash_line(img,pt1,pt2,color,thickness=1,style='dotted',gap=20):
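-        """Draw a dotted or dashed line from pt1 to pt2 on img, sampling points every `gap` pixels."""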
- dist =((pt1[0]-pt2[0])**2+(pt1[1]-pt2[1])**2)**.5
- pts= []
- for i in np.arange(0,dist,gap):
- r=i/dist
- x=int((pt1[0]*(1-r)+pt2[0]*r)+.5)
- y=int((pt1[1]*(1-r)+pt2[1]*r)+.5)
- p = (x,y)
- pts.append(p)
- if len(pts) ==0:
- return
- if style=='dotted':
- for p in pts:
- cv2.circle(img,p,thickness,color,-1)
- else:
- s=pts[0]
- e=pts[0]
- i=0
- for p in pts:
- s=e
- e=p
- if i%2==1:
- cv2.line(img,s,e,color,thickness)
- i+=1
-
- @staticmethod
- def draw_dash_poly(img,pts,color,thickness=1,style='dotted',gap=20):
- """ draw a polygon with dash line.
-
- Args:
- img (_type_): input image.
- pts (_type_): _description_
- color (_type_): _description_
- thickness (int, optional): _description_. Defaults to 1.
- style (str, optional): _description_. Defaults to 'dotted'.
- gap (int, optional): _description_. Defaults to 20.
-
- Returns:
- _type_: _description_
- """
- s=pts[0]
- e=pts[0]
- pts.append(pts.pop(0))
- for p in pts:
- s=e
- e=p
- BaseVisualizer.draw_dash_line(img,s,e,color,thickness,style,gap=gap)
- return img
-
- @staticmethod
- def draw_dash_rect(img,pt1,pt2,color,thickness=1,style='dotted',gap=10):
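-        """Draw a rectangle with dotted or dashed edges, given two opposite corners pt1 and pt2."""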
- pts = [pt1,(pt2[0],pt1[1]),pt2,(pt1[0],pt2[1])]
- return BaseVisualizer.draw_dash_poly(img,pts,color,thickness,style,gap=gap)
-
- def close(self):
- """ Function to release video writer. It should be called after finish visualization for all input frames.
- """
- self.writer.release()
-
-    def convert(self):
-        # ffmpeg cannot read and write the same file, so re-encode to a temporary path and replace the original.
-        tmp_path = self.output_path + ".h264.mp4"
-        subprocess.run(f"ffmpeg -y -loglevel quiet -stats -i {self.output_path} -c:v libx264 {tmp_path}".split())
-        os.replace(tmp_path, self.output_path)
-
- @abstractmethod
- def visualize(self, *args,**kwargs):
- """ Each project should implement this function to visualize a frame.
-
- """
- raise NotImplementedError
\ No newline at end of file
diff --git a/spaces/dasanik2001/FYP_G15_RCCIIT/app.py b/spaces/dasanik2001/FYP_G15_RCCIIT/app.py
deleted file mode 100644
index fb1d34075cc6809b2f5e3f7d6ad4fe4a3421ab1f..0000000000000000000000000000000000000000
--- a/spaces/dasanik2001/FYP_G15_RCCIIT/app.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import gradio as gr
-import tensorflow
-import numpy as np
-from tensorflow.keras.preprocessing.text import Tokenizer
-from tensorflow.keras.preprocessing.sequence import pad_sequences
-import tensorflow.keras.backend as K
-import pickle
-
-# Load the tokenizer from file
-tokenizer_filename = 'tokenizer.pickle'
-with open(tokenizer_filename, 'rb') as f:
- tokenizer = pickle.load(f)
-
-print("Tokenizer loaded successfully!")
-
-
-
-# Specify the path to the H5 file
-h5_file_path = 'model_CPU_final.h5'
-
-# Load the model from the H5 file
-model = tensorflow.keras.models.load_model(h5_file_path)
-def get_key(value):
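-    # Map a predicted class index back to its label ('positive', 'negative' or 'neutral').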
- dictionary={'positive':0,'negative':1,'neutral':2}
- for key,val in dictionary.items():
- if (val==value):
- return key
-def predict(Input):
-    sentence_lst = [Input]
-    sentence_seq = tokenizer.texts_to_sequences(sentence_lst)
-    sentence_padded = pad_sequences(sentence_seq, maxlen=300, padding='post')
-    # Run the model once and reuse the prediction for both logging and the label lookup.
-    prediction = model.predict(sentence_padded)
-    print(prediction)
-    ans = get_key(int(np.argmax(prediction, axis=1)[0]))
-    return "The emotion predicted is " + ans
-
-iface = gr.Interface(fn=predict, inputs="text", outputs="label")
-iface.launch()
-
diff --git a/spaces/dassum/Face-Id-Recognition/app.py b/spaces/dassum/Face-Id-Recognition/app.py
deleted file mode 100644
index 6bb51dbb5f735007d643ba03c28299fc4fa54bfb..0000000000000000000000000000000000000000
--- a/spaces/dassum/Face-Id-Recognition/app.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from sentence_transformers import util
-from transformers import pipeline
-from PIL import Image, ImageDraw
-from sentence_transformers import util,SentenceTransformer
-import gradio as gr
-checkpoint = "google/owlvit-base-patch32"
-detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
-model = SentenceTransformer('clip-ViT-L-14')
-
-def get_face_image(im1):
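-    # Detect the highest-scoring "human face" box, draw it on the image, and return a 256x256 crop of that region.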
- predictions = detector(
- im1,
- candidate_labels=["human face"],
- )
- max_score = 0
- box_area = None
- for prediction in predictions:
- box = prediction["box"]
- label = prediction["label"]
- score = prediction["score"]
- if score > max_score :
- xmin, ymin, xmax, ymax = box.values()
- box_area = (xmin, ymin, xmax, ymax)
- max_score = score
- else:
- continue
- draw = ImageDraw.Draw(im1)
- draw.rectangle(box_area, outline="red", width=1)
- #draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="blue")
- crop_img1 = im1.crop(box_area)
- #display(crop_img1)
- newsize = (256, 256)
- face_img1 = crop_img1.resize(newsize)
- #display(face_img1)
- return face_img1
-
-def predict(im1, im2,inp_sim):
- face_image1 = get_face_image(im1)
- face_image2 = get_face_image(im2)
-
- img_emb = model.encode([face_image1, face_image2])
- sim = util.cos_sim(img_emb[0], img_emb[1])
- if sim > inp_sim:
- return sim, "SAME PERSON, UNLOCK PHONE"
- else:
- return sim, "DIFFERENT PEOPLE, DON'T UNLOCK"
-
-
-description = "An application that can recognize if two faces belong to the same person or not"
-title = "Facial Identity Recognition System"
-
-interface = gr.Interface(fn=predict,
- inputs= [gr.Image(type="pil", source="webcam"),
- gr.Image(type="pil"),
-                                      gr.Slider(0, 1, value=0.8, label="Similarity Threshold", info="Choose between 0 and 1")],
- outputs= [gr.Number(label="Similarity"),
- gr.Textbox(label="Message")]
- )
-
-interface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/dbredvick/whisper-webui/src/whisperContainer.py b/spaces/dbredvick/whisper-webui/src/whisperContainer.py
deleted file mode 100644
index c997433c3f422771107f9fcbca2cb18e2f0ad3d6..0000000000000000000000000000000000000000
--- a/spaces/dbredvick/whisper-webui/src/whisperContainer.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# External programs
-import whisper
-
-class WhisperModelCache:
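-    """Simple in-memory cache of loaded Whisper models, keyed by model name and device."""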
- def __init__(self):
- self._cache = dict()
-
- def get(self, model_name, device: str = None):
- key = model_name + ":" + (device if device else '')
-
- result = self._cache.get(key)
-
- if result is None:
- print("Loading whisper model " + model_name)
- result = whisper.load_model(name=model_name, device=device)
- self._cache[key] = result
- return result
-
- def clear(self):
- self._cache.clear()
-
-# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times.
-GLOBAL_WHISPER_MODEL_CACHE = WhisperModelCache()
-
-class WhisperContainer:
- def __init__(self, model_name: str, device: str = None, download_root: str = None, cache: WhisperModelCache = None):
- self.model_name = model_name
- self.device = device
- self.download_root = download_root
- self.cache = cache
-
- # Will be created on demand
- self.model = None
-
- def get_model(self):
- if self.model is None:
-
- if (self.cache is None):
- print("Loading whisper model " + self.model_name)
- self.model = whisper.load_model(self.model_name, device=self.device, download_root=self.download_root)
- else:
- self.model = self.cache.get(self.model_name, device=self.device)
- return self.model
-
- def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict):
- """
-        Create a WhisperCallback object that can be used to transcribe audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- initial_prompt: str
- The initial prompt to use for the transcription.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- return WhisperCallback(self, language=language, task=task, initial_prompt=initial_prompt, **decodeOptions)
-
- # This is required for multiprocessing
- def __getstate__(self):
- return { "model_name": self.model_name, "device": self.device }
-
- def __setstate__(self, state):
- self.model_name = state["model_name"]
- self.device = state["device"]
- self.model = None
- # Depickled objects must use the global cache
- self.cache = GLOBAL_WHISPER_MODEL_CACHE
-
-
-class WhisperCallback:
- def __init__(self, model_container: WhisperContainer, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict):
- self.model_container = model_container
- self.language = language
- self.task = task
- self.initial_prompt = initial_prompt
- self.decodeOptions = decodeOptions
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
-        segment_index: int
-            The index of the audio segment being transcribed. The initial prompt is only prepended for the first segment (index 0).
- prompt: str
- The prompt to use for the transcription.
- detected_language: str
- The detected language of the audio file.
-
- Returns
- -------
- The result of the Whisper call.
- """
- model = self.model_container.get_model()
-
- return model.transcribe(audio, \
- language=self.language if self.language else detected_language, task=self.task, \
- initial_prompt=self._concat_prompt(self.initial_prompt, prompt) if segment_index == 0 else prompt, \
- **self.decodeOptions)
-
- def _concat_prompt(self, prompt1, prompt2):
- if (prompt1 is None):
- return prompt2
- elif (prompt2 is None):
- return prompt1
- else:
- return prompt1 + " " + prompt2
\ No newline at end of file
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/designspaceLib/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/designspaceLib/__init__.py
deleted file mode 100644
index 1c71fd002e8afcf4432db0e62b864c78b659d1fc..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/designspaceLib/__init__.py
+++ /dev/null
@@ -1,3283 +0,0 @@
-from __future__ import annotations
-
-import collections
-import copy
-import itertools
-import math
-import os
-import posixpath
-from io import BytesIO, StringIO
-from textwrap import indent
-from typing import Any, Dict, List, MutableMapping, Optional, Tuple, Union, cast
-
-from fontTools.misc import etree as ET
-from fontTools.misc import plistlib
-from fontTools.misc.loggingTools import LogMixin
-from fontTools.misc.textTools import tobytes, tostr
-
-"""
- designSpaceDocument
-
- - read and write designspace files
-"""
-
-__all__ = [
- "AxisDescriptor",
- "AxisLabelDescriptor",
- "AxisMappingDescriptor",
- "BaseDocReader",
- "BaseDocWriter",
- "DesignSpaceDocument",
- "DesignSpaceDocumentError",
- "DiscreteAxisDescriptor",
- "InstanceDescriptor",
- "LocationLabelDescriptor",
- "RangeAxisSubsetDescriptor",
- "RuleDescriptor",
- "SourceDescriptor",
- "ValueAxisSubsetDescriptor",
- "VariableFontDescriptor",
-]
-
-# ElementTree allows to find namespace-prefixed elements, but not attributes
-# so we have to do it ourselves for 'xml:lang'
-XML_NS = "{http://www.w3.org/XML/1998/namespace}"
-XML_LANG = XML_NS + "lang"
-
-
-def posix(path):
- """Normalize paths using forward slash to work also on Windows."""
- new_path = posixpath.join(*path.split(os.path.sep))
- if path.startswith("/"):
- # The above transformation loses absolute paths
- new_path = "/" + new_path
- elif path.startswith(r"\\"):
- # The above transformation loses leading slashes of UNC path mounts
- new_path = "//" + new_path
- return new_path
-
-
-def posixpath_property(private_name):
- """Generate a propery that holds a path always using forward slashes."""
-
- def getter(self):
- # Normal getter
- return getattr(self, private_name)
-
- def setter(self, value):
- # The setter rewrites paths using forward slashes
- if value is not None:
- value = posix(value)
- setattr(self, private_name, value)
-
- return property(getter, setter)
-
-
-class DesignSpaceDocumentError(Exception):
- def __init__(self, msg, obj=None):
- self.msg = msg
- self.obj = obj
-
- def __str__(self):
- return str(self.msg) + (": %r" % self.obj if self.obj is not None else "")
-
-
-class AsDictMixin(object):
- def asdict(self):
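-        """Recursively convert the descriptor's public attributes (including nested descriptors) to a plain dict."""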
- d = {}
- for attr, value in self.__dict__.items():
- if attr.startswith("_"):
- continue
- if hasattr(value, "asdict"):
- value = value.asdict()
- elif isinstance(value, list):
- value = [v.asdict() if hasattr(v, "asdict") else v for v in value]
- d[attr] = value
- return d
-
-
-class SimpleDescriptor(AsDictMixin):
- """Containers for a bunch of attributes"""
-
- # XXX this is ugly. The 'print' is inappropriate here, and instead of
- # assert, it should simply return True/False
- def compare(self, other):
- # test if this object contains the same data as the other
- for attr in self._attrs:
- try:
- assert getattr(self, attr) == getattr(other, attr)
- except AssertionError:
- print(
- "failed attribute",
- attr,
- getattr(self, attr),
- "!=",
- getattr(other, attr),
- )
-
- def __repr__(self):
- attrs = [f"{a}={repr(getattr(self, a))}," for a in self._attrs]
- attrs = indent("\n".join(attrs), " ")
- return f"{self.__class__.__name__}(\n{attrs}\n)"
-
-
-class SourceDescriptor(SimpleDescriptor):
- """Simple container for data related to the source
-
- .. code:: python
-
- doc = DesignSpaceDocument()
- s1 = SourceDescriptor()
- s1.path = masterPath1
- s1.name = "master.ufo1"
- s1.font = defcon.Font("master.ufo1")
- s1.location = dict(weight=0)
- s1.familyName = "MasterFamilyName"
- s1.styleName = "MasterStyleNameOne"
- s1.localisedFamilyName = dict(fr="Caractère")
- s1.mutedGlyphNames.append("A")
- s1.mutedGlyphNames.append("Z")
- doc.addSource(s1)
-
- """
-
- flavor = "source"
- _attrs = [
- "filename",
- "path",
- "name",
- "layerName",
- "location",
- "copyLib",
- "copyGroups",
- "copyFeatures",
- "muteKerning",
- "muteInfo",
- "mutedGlyphNames",
- "familyName",
- "styleName",
- "localisedFamilyName",
- ]
-
- filename = posixpath_property("_filename")
- path = posixpath_property("_path")
-
- def __init__(
- self,
- *,
- filename=None,
- path=None,
- font=None,
- name=None,
- location=None,
- designLocation=None,
- layerName=None,
- familyName=None,
- styleName=None,
- localisedFamilyName=None,
- copyLib=False,
- copyInfo=False,
- copyGroups=False,
- copyFeatures=False,
- muteKerning=False,
- muteInfo=False,
- mutedGlyphNames=None,
- ):
- self.filename = filename
- """string. A relative path to the source file, **as it is in the document**.
-
- MutatorMath + VarLib.
- """
- self.path = path
- """The absolute path, calculated from filename."""
-
- self.font = font
- """Any Python object. Optional. Points to a representation of this
- source font that is loaded in memory, as a Python object (e.g. a
- ``defcon.Font`` or a ``fontTools.ttFont.TTFont``).
-
- The default document reader will not fill-in this attribute, and the
- default writer will not use this attribute. It is up to the user of
- ``designspaceLib`` to either load the resource identified by
- ``filename`` and store it in this field, or write the contents of
-        this field to the disk and make ``filename`` point to that.
- """
-
- self.name = name
- """string. Optional. Unique identifier name for this source.
-
- MutatorMath + varLib.
- """
-
- self.designLocation = (
- designLocation if designLocation is not None else location or {}
- )
- """dict. Axis values for this source, in design space coordinates.
-
- MutatorMath + varLib.
-
- This may be only part of the full design location.
- See :meth:`getFullDesignLocation()`
-
- .. versionadded:: 5.0
- """
-
- self.layerName = layerName
- """string. The name of the layer in the source to look for
- outline data. Default ``None`` which means ``foreground``.
- """
- self.familyName = familyName
- """string. Family name of this source. Though this data
- can be extracted from the font, it can be efficient to have it right
- here.
-
- varLib.
- """
- self.styleName = styleName
- """string. Style name of this source. Though this data
- can be extracted from the font, it can be efficient to have it right
- here.
-
- varLib.
- """
- self.localisedFamilyName = localisedFamilyName or {}
- """dict. A dictionary of localised family name strings, keyed by
- language code.
-
- If present, will be used to build localized names for all instances.
-
- .. versionadded:: 5.0
- """
-
- self.copyLib = copyLib
- """bool. Indicates if the contents of the font.lib need to
- be copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyInfo = copyInfo
- """bool. Indicates if the non-interpolating font.info needs
- to be copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyGroups = copyGroups
- """bool. Indicates if the groups need to be copied to the
- instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyFeatures = copyFeatures
- """bool. Indicates if the feature text needs to be
- copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.muteKerning = muteKerning
- """bool. Indicates if the kerning data from this source
- needs to be muted (i.e. not be part of the calculations).
-
- MutatorMath only.
- """
- self.muteInfo = muteInfo
- """bool. Indicated if the interpolating font.info data for
- this source needs to be muted.
-
- MutatorMath only.
- """
- self.mutedGlyphNames = mutedGlyphNames or []
- """list. Glyphnames that need to be muted in the
- instances.
-
- MutatorMath only.
- """
-
- @property
- def location(self):
- """dict. Axis values for this source, in design space coordinates.
-
- MutatorMath + varLib.
-
- .. deprecated:: 5.0
- Use the more explicit alias for this property :attr:`designLocation`.
- """
- return self.designLocation
-
- @location.setter
- def location(self, location: Optional[AnisotropicLocationDict]):
- self.designLocation = location or {}
-
- def setFamilyName(self, familyName, languageCode="en"):
- """Setter for :attr:`localisedFamilyName`
-
- .. versionadded:: 5.0
- """
- self.localisedFamilyName[languageCode] = tostr(familyName)
-
- def getFamilyName(self, languageCode="en"):
- """Getter for :attr:`localisedFamilyName`
-
- .. versionadded:: 5.0
- """
- return self.localisedFamilyName.get(languageCode)
-
- def getFullDesignLocation(
- self, doc: "DesignSpaceDocument"
- ) -> AnisotropicLocationDict:
- """Get the complete design location of this source, from its
- :attr:`designLocation` and the document's axis defaults.
-
- .. versionadded:: 5.0
- """
- result: AnisotropicLocationDict = {}
- for axis in doc.axes:
- if axis.name in self.designLocation:
- result[axis.name] = self.designLocation[axis.name]
- else:
- result[axis.name] = axis.map_forward(axis.default)
- return result
-
-
-class RuleDescriptor(SimpleDescriptor):
- """Represents the rule descriptor element: a set of glyph substitutions to
- trigger conditionally in some parts of the designspace.
-
- .. code:: python
-
- r1 = RuleDescriptor()
- r1.name = "unique.rule.name"
- r1.conditionSets.append([dict(name="weight", minimum=-10, maximum=10), dict(...)])
- r1.conditionSets.append([dict(...), dict(...)])
- r1.subs.append(("a", "a.alt"))
-
- .. code:: xml
-
-
-        <!-- XML equivalent of the Python example above -->
-        <rules>
-            <rule name="unique.rule.name">
-                <conditionset>
-                    <condition name="weight" minimum="-10" maximum="10"/>
-                </conditionset>
-                <sub name="a" with="a.alt"/>
-            </rule>
-        </rules>
-
- """
-
- _attrs = ["name", "conditionSets", "subs"] # what do we need here
-
- def __init__(self, *, name=None, conditionSets=None, subs=None):
- self.name = name
- """string. Unique name for this rule. Can be used to reference this rule data."""
- # list of lists of dict(name='aaaa', minimum=0, maximum=1000)
- self.conditionSets = conditionSets or []
- """a list of conditionsets.
-
- - Each conditionset is a list of conditions.
- - Each condition is a dict with ``name``, ``minimum`` and ``maximum`` keys.
- """
- # list of substitutions stored as tuples of glyphnames ("a", "a.alt")
- self.subs = subs or []
- """list of substitutions.
-
- - Each substitution is stored as tuples of glyphnames, e.g. ("a", "a.alt").
- - Note: By default, rules are applied first, before other text
- shaping/OpenType layout, as they are part of the
- `Required Variation Alternates OpenType feature `_.
- See ref:`rules-element` § Attributes.
- """
-
-
-def evaluateRule(rule, location):
- """Return True if any of the rule's conditionsets matches the given location."""
- return any(evaluateConditions(c, location) for c in rule.conditionSets)
-
-
-def evaluateConditions(conditions, location):
- """Return True if all the conditions matches the given location.
-
- - If a condition has no minimum, check for < maximum.
- - If a condition has no maximum, check for > minimum.
- """
- for cd in conditions:
- value = location[cd["name"]]
- if cd.get("minimum") is None:
- if value > cd["maximum"]:
- return False
- elif cd.get("maximum") is None:
- if cd["minimum"] > value:
- return False
- elif not cd["minimum"] <= value <= cd["maximum"]:
- return False
- return True
-
-
-def processRules(rules, location, glyphNames):
- """Apply these rules at this location to these glyphnames.
-
- Return a new list of glyphNames with substitutions applied.
-
- - rule order matters
- """
- newNames = []
- for rule in rules:
- if evaluateRule(rule, location):
- for name in glyphNames:
- swap = False
- for a, b in rule.subs:
- if name == a:
- swap = True
- break
- if swap:
- newNames.append(b)
- else:
- newNames.append(name)
- glyphNames = newNames
- newNames = []
- return glyphNames
-
-
-AnisotropicLocationDict = Dict[str, Union[float, Tuple[float, float]]]
-SimpleLocationDict = Dict[str, float]
-
-
-class AxisMappingDescriptor(SimpleDescriptor):
- """Represents the axis mapping element: mapping an input location
- to an output location in the designspace.
-
- .. code:: python
-
- m1 = AxisMappingDescriptor()
- m1.inputLocation = {"weight": 900, "width": 150}
- m1.outputLocation = {"weight": 870}
-
- .. code:: xml
-
-
-        <!-- XML equivalent of the Python example above -->
-        <mappings>
-            <mapping>
-                <input>
-                    <dimension name="weight" xvalue="900"/>
-                    <dimension name="width" xvalue="150"/>
-                </input>
-                <output>
-                    <dimension name="weight" xvalue="870"/>
-                </output>
-            </mapping>
-        </mappings>
-
- """
-
- _attrs = ["inputLocation", "outputLocation"]
-
- def __init__(self, *, inputLocation=None, outputLocation=None):
- self.inputLocation: SimpleLocationDict = inputLocation or {}
- """dict. Axis values for the input of the mapping, in design space coordinates.
-
- varLib.
-
- .. versionadded:: 5.1
- """
- self.outputLocation: SimpleLocationDict = outputLocation or {}
- """dict. Axis values for the output of the mapping, in design space coordinates.
-
- varLib.
-
- .. versionadded:: 5.1
- """
-
-
-class InstanceDescriptor(SimpleDescriptor):
- """Simple container for data related to the instance
-
-
- .. code:: python
-
- i2 = InstanceDescriptor()
- i2.path = instancePath2
- i2.familyName = "InstanceFamilyName"
- i2.styleName = "InstanceStyleName"
- i2.name = "instance.ufo2"
- # anisotropic location
- i2.designLocation = dict(weight=500, width=(400,300))
- i2.postScriptFontName = "InstancePostscriptName"
- i2.styleMapFamilyName = "InstanceStyleMapFamilyName"
- i2.styleMapStyleName = "InstanceStyleMapStyleName"
- i2.lib['com.coolDesignspaceApp.specimenText'] = 'Hamburgerwhatever'
- doc.addInstance(i2)
- """
-
- flavor = "instance"
- _defaultLanguageCode = "en"
- _attrs = [
- "filename",
- "path",
- "name",
- "locationLabel",
- "designLocation",
- "userLocation",
- "familyName",
- "styleName",
- "postScriptFontName",
- "styleMapFamilyName",
- "styleMapStyleName",
- "localisedFamilyName",
- "localisedStyleName",
- "localisedStyleMapFamilyName",
- "localisedStyleMapStyleName",
- "glyphs",
- "kerning",
- "info",
- "lib",
- ]
-
- filename = posixpath_property("_filename")
- path = posixpath_property("_path")
-
- def __init__(
- self,
- *,
- filename=None,
- path=None,
- font=None,
- name=None,
- location=None,
- locationLabel=None,
- designLocation=None,
- userLocation=None,
- familyName=None,
- styleName=None,
- postScriptFontName=None,
- styleMapFamilyName=None,
- styleMapStyleName=None,
- localisedFamilyName=None,
- localisedStyleName=None,
- localisedStyleMapFamilyName=None,
- localisedStyleMapStyleName=None,
- glyphs=None,
- kerning=True,
- info=True,
- lib=None,
- ):
- self.filename = filename
- """string. Relative path to the instance file, **as it is
- in the document**. The file may or may not exist.
-
- MutatorMath + VarLib.
- """
- self.path = path
- """string. Absolute path to the instance file, calculated from
- the document path and the string in the filename attr. The file may
- or may not exist.
-
- MutatorMath.
- """
- self.font = font
- """Same as :attr:`SourceDescriptor.font`
-
- .. seealso:: :attr:`SourceDescriptor.font`
- """
- self.name = name
- """string. Unique identifier name of the instance, used to
- identify it if it needs to be referenced from elsewhere in the
- document.
- """
- self.locationLabel = locationLabel
- """Name of a :class:`LocationLabelDescriptor`. If
- provided, the instance should have the same location as the
- LocationLabel.
-
- .. seealso::
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.designLocation: AnisotropicLocationDict = (
- designLocation if designLocation is not None else (location or {})
- )
- """dict. Axis values for this instance, in design space coordinates.
-
- MutatorMath + varLib.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.userLocation: SimpleLocationDict = userLocation or {}
- """dict. Axis values for this instance, in user space coordinates.
-
- MutatorMath + varLib.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.familyName = familyName
- """string. Family name of this instance.
-
- MutatorMath + varLib.
- """
- self.styleName = styleName
- """string. Style name of this instance.
-
- MutatorMath + varLib.
- """
- self.postScriptFontName = postScriptFontName
- """string. Postscript fontname for this instance.
-
- MutatorMath + varLib.
- """
- self.styleMapFamilyName = styleMapFamilyName
- """string. StyleMap familyname for this instance.
-
- MutatorMath + varLib.
- """
- self.styleMapStyleName = styleMapStyleName
- """string. StyleMap stylename for this instance.
-
- MutatorMath + varLib.
- """
- self.localisedFamilyName = localisedFamilyName or {}
- """dict. A dictionary of localised family name
- strings, keyed by language code.
- """
- self.localisedStyleName = localisedStyleName or {}
- """dict. A dictionary of localised stylename
- strings, keyed by language code.
- """
- self.localisedStyleMapFamilyName = localisedStyleMapFamilyName or {}
- """A dictionary of localised style map
- familyname strings, keyed by language code.
- """
- self.localisedStyleMapStyleName = localisedStyleMapStyleName or {}
- """A dictionary of localised style map
- stylename strings, keyed by language code.
- """
- self.glyphs = glyphs or {}
- """dict for special master definitions for glyphs. If glyphs
- need special masters (to record the results of executed rules for
- example).
-
- MutatorMath.
-
- .. deprecated:: 5.0
- Use rules or sparse sources instead.
- """
- self.kerning = kerning
- """ bool. Indicates if this instance needs its kerning
- calculated.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.info = info
- """bool. Indicated if this instance needs the interpolating
- font.info calculated.
-
- .. deprecated:: 5.0
- """
-
- self.lib = lib or {}
- """Custom data associated with this instance."""
-
- @property
- def location(self):
- """dict. Axis values for this instance.
-
- MutatorMath + varLib.
-
- .. deprecated:: 5.0
- Use the more explicit alias for this property :attr:`designLocation`.
- """
- return self.designLocation
-
- @location.setter
- def location(self, location: Optional[AnisotropicLocationDict]):
- self.designLocation = location or {}
-
- def setStyleName(self, styleName, languageCode="en"):
- """These methods give easier access to the localised names."""
- self.localisedStyleName[languageCode] = tostr(styleName)
-
- def getStyleName(self, languageCode="en"):
- return self.localisedStyleName.get(languageCode)
-
- def setFamilyName(self, familyName, languageCode="en"):
- self.localisedFamilyName[languageCode] = tostr(familyName)
-
- def getFamilyName(self, languageCode="en"):
- return self.localisedFamilyName.get(languageCode)
-
- def setStyleMapStyleName(self, styleMapStyleName, languageCode="en"):
- self.localisedStyleMapStyleName[languageCode] = tostr(styleMapStyleName)
-
- def getStyleMapStyleName(self, languageCode="en"):
- return self.localisedStyleMapStyleName.get(languageCode)
-
- def setStyleMapFamilyName(self, styleMapFamilyName, languageCode="en"):
- self.localisedStyleMapFamilyName[languageCode] = tostr(styleMapFamilyName)
-
- def getStyleMapFamilyName(self, languageCode="en"):
- return self.localisedStyleMapFamilyName.get(languageCode)
-
- def clearLocation(self, axisName: Optional[str] = None):
- """Clear all location-related fields. Ensures that
-        :attr:`designLocation` and :attr:`userLocation` are dictionaries
- (possibly empty if clearing everything).
-
- In order to update the location of this instance wholesale, a user
- should first clear all the fields, then change the field(s) for which
- they have data.
-
- .. code:: python
-
- instance.clearLocation()
- instance.designLocation = {'Weight': (34, 36.5), 'Width': 100}
- instance.userLocation = {'Opsz': 16}
-
- In order to update a single axis location, the user should only clear
- that axis, then edit the values:
-
- .. code:: python
-
- instance.clearLocation('Weight')
- instance.designLocation['Weight'] = (34, 36.5)
-
- Args:
- axisName: if provided, only clear the location for that axis.
-
- .. versionadded:: 5.0
- """
- self.locationLabel = None
- if axisName is None:
- self.designLocation = {}
- self.userLocation = {}
- else:
- if self.designLocation is None:
- self.designLocation = {}
- if axisName in self.designLocation:
- del self.designLocation[axisName]
- if self.userLocation is None:
- self.userLocation = {}
- if axisName in self.userLocation:
- del self.userLocation[axisName]
-
- def getLocationLabelDescriptor(
- self, doc: "DesignSpaceDocument"
- ) -> Optional[LocationLabelDescriptor]:
- """Get the :class:`LocationLabelDescriptor` instance that matches
-        this instance's :attr:`locationLabel`.
-
- Raises if the named label can't be found.
-
- .. versionadded:: 5.0
- """
- if self.locationLabel is None:
- return None
- label = doc.getLocationLabel(self.locationLabel)
- if label is None:
- raise DesignSpaceDocumentError(
- "InstanceDescriptor.getLocationLabelDescriptor(): "
- f"unknown location label `{self.locationLabel}` in instance `{self.name}`."
- )
- return label
-
- def getFullDesignLocation(
- self, doc: "DesignSpaceDocument"
- ) -> AnisotropicLocationDict:
- """Get the complete design location of this instance, by combining data
- from the various location fields, default axis values and mappings, and
- top-level location labels.
-
- The source of truth for this instance's location is determined for each
- axis independently by taking the first not-None field in this list:
-
- - ``locationLabel``: the location along this axis is the same as the
- matching STAT format 4 label. No anisotropy.
- - ``designLocation[axisName]``: the explicit design location along this
- axis, possibly anisotropic.
- - ``userLocation[axisName]``: the explicit user location along this
- axis. No anisotropy.
- - ``axis.default``: default axis value. No anisotropy.
-
- .. versionadded:: 5.0
- """
- label = self.getLocationLabelDescriptor(doc)
- if label is not None:
- return doc.map_forward(label.userLocation) # type: ignore
- result: AnisotropicLocationDict = {}
- for axis in doc.axes:
- if axis.name in self.designLocation:
- result[axis.name] = self.designLocation[axis.name]
- elif axis.name in self.userLocation:
- result[axis.name] = axis.map_forward(self.userLocation[axis.name])
- else:
- result[axis.name] = axis.map_forward(axis.default)
- return result
-
- def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict:
- """Get the complete user location for this instance.
-
- .. seealso:: :meth:`getFullDesignLocation`
-
- .. versionadded:: 5.0
- """
- return doc.map_backward(self.getFullDesignLocation(doc))
-
-
-def tagForAxisName(name):
- # try to find or make a tag name for this axis name
- names = {
- "weight": ("wght", dict(en="Weight")),
- "width": ("wdth", dict(en="Width")),
- "optical": ("opsz", dict(en="Optical Size")),
- "slant": ("slnt", dict(en="Slant")),
- "italic": ("ital", dict(en="Italic")),
- }
- if name.lower() in names:
- return names[name.lower()]
- if len(name) < 4:
- tag = name + "*" * (4 - len(name))
- else:
- tag = name[:4]
- return tag, dict(en=name)
-
-
-class AbstractAxisDescriptor(SimpleDescriptor):
- flavor = "axis"
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- # opentype tag for this axis
- self.tag = tag
- """string. Four letter tag for this axis. Some might be
- registered at the `OpenType
- specification `__.
- Privately-defined axis tags must begin with an uppercase letter and
- use only uppercase letters or digits.
- """
- # name of the axis used in locations
- self.name = name
- """string. Name of the axis as it is used in the location dicts.
-
- MutatorMath + varLib.
- """
- # names for UI purposes, if this is not a standard axis,
- self.labelNames = labelNames or {}
- """dict. When defining a non-registered axis, it will be
- necessary to define user-facing readable names for the axis. Keyed by
- xml:lang code. Values are required to be ``unicode`` strings, even if
- they only contain ASCII characters.
- """
- self.hidden = hidden
- """bool. Whether this axis should be hidden in user interfaces.
- """
- self.map = map or []
- """list of input / output values that can describe a warp of user space
- to design space coordinates. If no map values are present, it is assumed
- user space is the same as design space, as in [(minimum, minimum),
- (maximum, maximum)].
-
- varLib.
- """
- self.axisOrdering = axisOrdering
- """STAT table field ``axisOrdering``.
-
- See: `OTSpec STAT Axis Record `_
-
- .. versionadded:: 5.0
- """
- self.axisLabels: List[AxisLabelDescriptor] = axisLabels or []
- """STAT table entries for Axis Value Tables format 1, 2, 3.
-
- See: `OTSpec STAT Axis Value Tables `_
-
- .. versionadded:: 5.0
- """
-
-
-class AxisDescriptor(AbstractAxisDescriptor):
- """Simple container for the axis data.
-
- Add more localisations?
-
- .. code:: python
-
- a1 = AxisDescriptor()
- a1.minimum = 1
- a1.maximum = 1000
- a1.default = 400
- a1.name = "weight"
- a1.tag = "wght"
- a1.labelNames['fa-IR'] = "قطر"
- a1.labelNames['en'] = "Wéíght"
- a1.map = [(1.0, 10.0), (400.0, 66.0), (1000.0, 990.0)]
- a1.axisOrdering = 1
- a1.axisLabels = [
- AxisLabelDescriptor(name="Regular", userValue=400, elidable=True)
- ]
- doc.addAxis(a1)
- """
-
- _attrs = [
- "tag",
- "name",
- "maximum",
- "minimum",
- "default",
- "map",
- "axisOrdering",
- "axisLabels",
- ]
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- minimum=None,
- default=None,
- maximum=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- super().__init__(
- tag=tag,
- name=name,
- labelNames=labelNames,
- hidden=hidden,
- map=map,
- axisOrdering=axisOrdering,
- axisLabels=axisLabels,
- )
- self.minimum = minimum
- """number. The minimum value for this axis in user space.
-
- MutatorMath + varLib.
- """
- self.maximum = maximum
- """number. The maximum value for this axis in user space.
-
- MutatorMath + varLib.
- """
- self.default = default
- """number. The default value for this axis, i.e. when a new location is
- created, this is the value this axis will get in user space.
-
- MutatorMath + varLib.
- """
-
- def serialize(self):
- # output to a dict, used in testing
- return dict(
- tag=self.tag,
- name=self.name,
- labelNames=self.labelNames,
- maximum=self.maximum,
- minimum=self.minimum,
- default=self.default,
- hidden=self.hidden,
- map=self.map,
- axisOrdering=self.axisOrdering,
- axisLabels=self.axisLabels,
- )
-
- def map_forward(self, v):
- """Maps value from axis mapping's input (user) to output (design)."""
- from fontTools.varLib.models import piecewiseLinearMap
-
- if not self.map:
- return v
- return piecewiseLinearMap(v, {k: v for k, v in self.map})
-
- def map_backward(self, v):
- """Maps value from axis mapping's output (design) to input (user)."""
- from fontTools.varLib.models import piecewiseLinearMap
-
- if isinstance(v, tuple):
- v = v[0]
- if not self.map:
- return v
- return piecewiseLinearMap(v, {v: k for k, v in self.map})
-
-
-class DiscreteAxisDescriptor(AbstractAxisDescriptor):
- """Container for discrete axis data.
-
- Use this for axes that do not interpolate. The main difference from a
- continuous axis is that a continuous axis has a ``minimum`` and ``maximum``,
- while a discrete axis has a list of ``values``.
-
- Example: an Italic axis with 2 stops, Roman and Italic, that are not
- compatible. The axis still allows to bind together the full font family,
- which is useful for the STAT table, however it can't become a variation
- axis in a VF.
-
- .. code:: python
-
- a2 = DiscreteAxisDescriptor()
- a2.values = [0, 1]
- a2.default = 0
- a2.name = "Italic"
- a2.tag = "ITAL"
- a2.labelNames['fr'] = "Italique"
- a2.map = [(0, 0), (1, -11)]
- a2.axisOrdering = 2
- a2.axisLabels = [
- AxisLabelDescriptor(name="Roman", userValue=0, elidable=True)
- ]
- doc.addAxis(a2)
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis"
- _attrs = ("tag", "name", "values", "default", "map", "axisOrdering", "axisLabels")
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- values=None,
- default=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- super().__init__(
- tag=tag,
- name=name,
- labelNames=labelNames,
- hidden=hidden,
- map=map,
- axisOrdering=axisOrdering,
- axisLabels=axisLabels,
- )
- self.default: float = default
- """The default value for this axis, i.e. when a new location is
- created, this is the value this axis will get in user space.
-
- However, this default value is less important than in continuous axes:
-
- - it doesn't define the "neutral" version of outlines from which
- deltas would apply, as this axis does not interpolate.
- - it doesn't provide the reference glyph set for the designspace, as
- fonts at each value can have different glyph sets.
- """
- self.values: List[float] = values or []
- """List of possible values for this axis. Contrary to continuous axes,
- only the values in this list can be taken by the axis, nothing in-between.
- """
-
- def map_forward(self, value):
- """Maps value from axis mapping's input to output.
-
- Returns value unchanged if no mapping entry is found.
-
- Note: for discrete axes, each value must have its mapping entry, if
- you intend that value to be mapped.
- """
- return next((v for k, v in self.map if k == value), value)
-
- def map_backward(self, value):
- """Maps value from axis mapping's output to input.
-
- Returns value unchanged if no mapping entry is found.
-
- Note: for discrete axes, each value must have its mapping entry, if
- you intend that value to be mapped.
- """
- if isinstance(value, tuple):
- value = value[0]
- return next((k for k, v in self.map if v == value), value)
-
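A small sketch of the discrete lookup behaviour, reusing the Italic example from the docstring above: there is no interpolation, so values without a map entry are returned unchanged.

```python
from fontTools.designspaceLib import DiscreteAxisDescriptor

ital = DiscreteAxisDescriptor()
ital.tag = "ITAL"
ital.name = "Italic"
ital.values = [0, 1]
ital.default = 0
ital.map = [(0, 0), (1, -11)]

print(ital.map_forward(1))     # -11 (exact mapping entry)
print(ital.map_backward(-11))  # 1
print(ital.map_forward(0.5))   # 0.5 (no entry, value returned unchanged)
```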
-
-class AxisLabelDescriptor(SimpleDescriptor):
- """Container for axis label data.
-
- Analogue of OpenType's STAT data for a single axis (formats 1, 2 and 3).
- All values are user values.
- See: `OTSpec STAT Axis value table, format 1, 2, 3 `_
-
- The STAT format of the Axis value depends on which fields are filled in,
- see :meth:`getFormat`.
-
- .. versionadded:: 5.0
- """
-
- flavor = "label"
- _attrs = (
- "userMinimum",
- "userValue",
- "userMaximum",
- "name",
- "elidable",
- "olderSibling",
- "linkedUserValue",
- "labelNames",
- )
-
- def __init__(
- self,
- *,
- name,
- userValue,
- userMinimum=None,
- userMaximum=None,
- elidable=False,
- olderSibling=False,
- linkedUserValue=None,
- labelNames=None,
- ):
- self.userMinimum: Optional[float] = userMinimum
- """STAT field ``rangeMinValue`` (format 2)."""
- self.userValue: float = userValue
- """STAT field ``value`` (format 1, 3) or ``nominalValue`` (format 2)."""
- self.userMaximum: Optional[float] = userMaximum
- """STAT field ``rangeMaxValue`` (format 2)."""
- self.name: str = name
- """Label for this axis location, STAT field ``valueNameID``."""
- self.elidable: bool = elidable
- """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``.
-
- See: `OTSpec STAT Flags `_
- """
- self.olderSibling: bool = olderSibling
- """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``.
-
- See: `OTSpec STAT Flags `_
- """
- self.linkedUserValue: Optional[float] = linkedUserValue
- """STAT field ``linkedValue`` (format 3)."""
- self.labelNames: MutableMapping[str, str] = labelNames or {}
- """User-facing translations of this location's label. Keyed by
- ``xml:lang`` code.
- """
-
- def getFormat(self) -> int:
- """Determine which format of STAT Axis value to use to encode this label.
-
- =========== ========= =========== =========== ===============
- STAT Format userValue userMinimum userMaximum linkedUserValue
- =========== ========= =========== =========== ===============
- 1 ✅ ❌ ❌ ❌
- 2 ✅ ✅ ✅ ❌
- 3 ✅ ❌ ❌ ✅
- =========== ========= =========== =========== ===============
- """
- if self.linkedUserValue is not None:
- return 3
- if self.userMinimum is not None or self.userMaximum is not None:
- return 2
- return 1
-
- @property
- def defaultName(self) -> str:
- """Return the English name from :attr:`labelNames` or the :attr:`name`."""
- return self.labelNames.get("en") or self.name
-
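A short sketch of how the fields above select the STAT format (the label names and values are illustrative):

```python
from fontTools.designspaceLib import AxisLabelDescriptor

regular = AxisLabelDescriptor(
    name="Regular", userValue=400, elidable=True, linkedUserValue=700
)
print(regular.getFormat())  # 3: linkedUserValue is set

medium = AxisLabelDescriptor(
    name="Medium", userValue=500, userMinimum=450, userMaximum=550
)
print(medium.getFormat())   # 2: a user range is set

black = AxisLabelDescriptor(name="Black", userValue=900)
print(black.getFormat())    # 1: only a single user value
```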
-
-class LocationLabelDescriptor(SimpleDescriptor):
- """Container for location label data.
-
- Analogue of OpenType's STAT data for a free-floating location (format 4).
- All values are user values.
-
- See: `OTSpec STAT Axis value table, format 4 `_
-
- .. versionadded:: 5.0
- """
-
- flavor = "label"
- _attrs = ("name", "elidable", "olderSibling", "userLocation", "labelNames")
-
- def __init__(
- self,
- *,
- name,
- userLocation,
- elidable=False,
- olderSibling=False,
- labelNames=None,
- ):
- self.name: str = name
- """Label for this named location, STAT field ``valueNameID``."""
- self.userLocation: SimpleLocationDict = userLocation or {}
- """Location in user coordinates along each axis.
-
- If an axis is not mentioned, it is assumed to be at its default location.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullUserLocation`
- """
- self.elidable: bool = elidable
- """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``.
-
- See: `OTSpec STAT Flags `_
- """
- self.olderSibling: bool = olderSibling
- """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``.
-
- See: `OTSpec STAT Flags `_
- """
- self.labelNames: Dict[str, str] = labelNames or {}
- """User-facing translations of this location's label. Keyed by
- xml:lang code.
- """
-
- @property
- def defaultName(self) -> str:
- """Return the English name from :attr:`labelNames` or the :attr:`name`."""
- return self.labelNames.get("en") or self.name
-
- def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict:
- """Get the complete user location of this label, by combining data
- from the explicit user location and default axis values.
-
- .. versionadded:: 5.0
- """
- return {
- axis.name: self.userLocation.get(axis.name, axis.default)
- for axis in doc.axes
- }
-
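A sketch of how a partial user location is completed from the axis defaults (the axes and values are illustrative):

```python
from fontTools.designspaceLib import (
    AxisDescriptor,
    DesignSpaceDocument,
    LocationLabelDescriptor,
)

doc = DesignSpaceDocument()

weight = AxisDescriptor()
weight.name, weight.tag = "Weight", "wght"
weight.minimum, weight.default, weight.maximum = 100, 400, 900
doc.addAxis(weight)

width = AxisDescriptor()
width.name, width.tag = "Width", "wdth"
width.minimum, width.default, width.maximum = 50, 100, 200
doc.addAxis(width)

# The label only mentions Weight; Width is filled in from its default.
label = LocationLabelDescriptor(name="Bold", userLocation={"Weight": 700})
print(label.getFullUserLocation(doc))  # {'Weight': 700, 'Width': 100}
```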
-
-class VariableFontDescriptor(SimpleDescriptor):
- """Container for variable fonts, sub-spaces of the Designspace.
-
- Use-cases:
-
- - From a single DesignSpace with discrete axes, define 1 variable font
- per value on the discrete axes. Before version 5, you would have needed
- 1 DesignSpace per such variable font, and a lot of data duplication.
- - From a big variable font with many axes, define subsets of that variable
- font that only include some axes and freeze other axes at a given location.
-
- .. versionadded:: 5.0
- """
-
- flavor = "variable-font"
- _attrs = ("filename", "axisSubsets", "lib")
-
- filename = posixpath_property("_filename")
-
- def __init__(self, *, name, filename=None, axisSubsets=None, lib=None):
- self.name: str = name
- """string, required. Name of this variable to identify it during the
- build process and from other parts of the document, and also as a
- filename in case the filename property is empty.
-
- VarLib.
- """
- self.filename: str = filename
- """string, optional. Relative path to the variable font file, **as it is
- in the document**. The file may or may not exist.
-
- If not specified, the :attr:`name` will be used as a basename for the file.
- """
- self.axisSubsets: List[
- Union[RangeAxisSubsetDescriptor, ValueAxisSubsetDescriptor]
- ] = (axisSubsets or [])
- """Axis subsets to include in this variable font.
-
- If an axis is not mentioned, assume that we only want the default
- location of that axis (same as a :class:`ValueAxisSubsetDescriptor`).
- """
- self.lib: MutableMapping[str, Any] = lib or {}
- """Custom data associated with this variable font."""
-
-
-class RangeAxisSubsetDescriptor(SimpleDescriptor):
- """Subset of a continuous axis to include in a variable font.
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis-subset"
- _attrs = ("name", "userMinimum", "userDefault", "userMaximum")
-
- def __init__(
- self, *, name, userMinimum=-math.inf, userDefault=None, userMaximum=math.inf
- ):
- self.name: str = name
- """Name of the :class:`AxisDescriptor` to subset."""
- self.userMinimum: float = userMinimum
- """New minimum value of the axis in the target variable font.
- If not specified, assume the same minimum value as the full axis.
- (default = ``-math.inf``)
- """
- self.userDefault: Optional[float] = userDefault
- """New default value of the axis in the target variable font.
- If not specified, assume the same default value as the full axis.
- (default = ``None``)
- """
- self.userMaximum: float = userMaximum
- """New maximum value of the axis in the target variable font.
- If not specified, assume the same maximum value as the full axis.
- (default = ``math.inf``)
- """
-
-
-class ValueAxisSubsetDescriptor(SimpleDescriptor):
- """Single value of a discrete or continuous axis to use in a variable font.
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis-subset"
- _attrs = ("name", "userValue")
-
- def __init__(self, *, name, userValue):
- self.name: str = name
- """Name of the :class:`AxisDescriptor` or :class:`DiscreteAxisDescriptor`
- to "snapshot" or "freeze".
- """
- self.userValue: float = userValue
- """Value in user coordinates at which to freeze the given axis."""
-
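A sketch of the second use-case above: carving two variable fonts out of one designspace, keeping a Weight range and freezing a discrete Italic axis (all names and values are illustrative).

```python
from fontTools.designspaceLib import (
    RangeAxisSubsetDescriptor,
    ValueAxisSubsetDescriptor,
    VariableFontDescriptor,
)

vf_upright = VariableFontDescriptor(
    name="MyFontVF-Upright",
    axisSubsets=[
        RangeAxisSubsetDescriptor(name="Weight"),               # keep the full range
        ValueAxisSubsetDescriptor(name="Italic", userValue=0),  # freeze at Roman
    ],
)
vf_italic = VariableFontDescriptor(
    name="MyFontVF-Italic",
    axisSubsets=[
        RangeAxisSubsetDescriptor(name="Weight", userMinimum=300, userMaximum=700),
        ValueAxisSubsetDescriptor(name="Italic", userValue=1),
    ],
)
# These descriptors would live in a DesignSpaceDocument's ``variableFonts`` list.
```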
-
-class BaseDocWriter(object):
- _whiteSpace = " "
- axisDescriptorClass = AxisDescriptor
- discreteAxisDescriptorClass = DiscreteAxisDescriptor
- axisLabelDescriptorClass = AxisLabelDescriptor
- axisMappingDescriptorClass = AxisMappingDescriptor
- locationLabelDescriptorClass = LocationLabelDescriptor
- ruleDescriptorClass = RuleDescriptor
- sourceDescriptorClass = SourceDescriptor
- variableFontDescriptorClass = VariableFontDescriptor
- valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor
- rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor
- instanceDescriptorClass = InstanceDescriptor
-
- @classmethod
- def getAxisDecriptor(cls):
- return cls.axisDescriptorClass()
-
- @classmethod
- def getAxisMappingDescriptor(cls):
- return cls.axisMappingDescriptorClass()
-
- @classmethod
- def getSourceDescriptor(cls):
- return cls.sourceDescriptorClass()
-
- @classmethod
- def getInstanceDescriptor(cls):
- return cls.instanceDescriptorClass()
-
- @classmethod
- def getRuleDescriptor(cls):
- return cls.ruleDescriptorClass()
-
- def __init__(self, documentPath, documentObject: DesignSpaceDocument):
- self.path = documentPath
- self.documentObject = documentObject
- self.effectiveFormatTuple = self._getEffectiveFormatTuple()
- self.root = ET.Element("designspace")
-
- def write(self, pretty=True, encoding="UTF-8", xml_declaration=True):
- self.root.attrib["format"] = ".".join(str(i) for i in self.effectiveFormatTuple)
-
- if (
- self.documentObject.axes
- or self.documentObject.axisMappings
- or self.documentObject.elidedFallbackName is not None
- ):
- axesElement = ET.Element("axes")
- if self.documentObject.elidedFallbackName is not None:
- axesElement.attrib[
- "elidedfallbackname"
- ] = self.documentObject.elidedFallbackName
- self.root.append(axesElement)
- for axisObject in self.documentObject.axes:
- self._addAxis(axisObject)
-
- if self.documentObject.axisMappings:
- mappingsElement = ET.Element("mappings")
- self.root.findall(".axes")[0].append(mappingsElement)
- for mappingObject in self.documentObject.axisMappings:
- self._addAxisMapping(mappingsElement, mappingObject)
-
- if self.documentObject.locationLabels:
- labelsElement = ET.Element("labels")
- for labelObject in self.documentObject.locationLabels:
- self._addLocationLabel(labelsElement, labelObject)
- self.root.append(labelsElement)
-
- if self.documentObject.rules:
- if getattr(self.documentObject, "rulesProcessingLast", False):
- attributes = {"processing": "last"}
- else:
- attributes = {}
- self.root.append(ET.Element("rules", attributes))
- for ruleObject in self.documentObject.rules:
- self._addRule(ruleObject)
-
- if self.documentObject.sources:
- self.root.append(ET.Element("sources"))
- for sourceObject in self.documentObject.sources:
- self._addSource(sourceObject)
-
- if self.documentObject.variableFonts:
- variableFontsElement = ET.Element("variable-fonts")
- for variableFont in self.documentObject.variableFonts:
- self._addVariableFont(variableFontsElement, variableFont)
- self.root.append(variableFontsElement)
-
- if self.documentObject.instances:
- self.root.append(ET.Element("instances"))
- for instanceObject in self.documentObject.instances:
- self._addInstance(instanceObject)
-
- if self.documentObject.lib:
- self._addLib(self.root, self.documentObject.lib, 2)
-
- tree = ET.ElementTree(self.root)
- tree.write(
- self.path,
- encoding=encoding,
- method="xml",
- xml_declaration=xml_declaration,
- pretty_print=pretty,
- )
-
- def _getEffectiveFormatTuple(self):
- """Try to use the version specified in the document, or a sufficiently
- recent version to be able to encode what the document contains.
- """
- minVersion = self.documentObject.formatTuple
- if (
- any(
- hasattr(axis, "values")
- or axis.axisOrdering is not None
- or axis.axisLabels
- for axis in self.documentObject.axes
- )
- or self.documentObject.locationLabels
- or any(source.localisedFamilyName for source in self.documentObject.sources)
- or self.documentObject.variableFonts
- or any(
- instance.locationLabel or instance.userLocation
- for instance in self.documentObject.instances
- )
- ):
- if minVersion < (5, 0):
- minVersion = (5, 0)
- if self.documentObject.axisMappings:
- if minVersion < (5, 1):
- minVersion = (5, 1)
- return minVersion
-
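A sketch of the version-bumping logic: a document that declares an older format but uses a 5.0-only feature (here, a location label) is written with at least format 5.0. The axis values are illustrative; the writer's ``__init__`` only computes the effective format, nothing is written to disk.

```python
from fontTools.designspaceLib import (
    AxisDescriptor,
    BaseDocWriter,
    DesignSpaceDocument,
    LocationLabelDescriptor,
)

doc = DesignSpaceDocument()
doc.formatVersion = "4.1"  # the document claims an older format

weight = AxisDescriptor()
weight.name, weight.tag = "Weight", "wght"
weight.minimum, weight.default, weight.maximum = 100, 400, 900
doc.addAxis(weight)

doc.locationLabels.append(
    LocationLabelDescriptor(name="Bold", userLocation={"Weight": 700})
)

writer = BaseDocWriter("unused.designspace", doc)
print(writer.effectiveFormatTuple)  # (5, 0): bumped because of the location label
```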
- def _makeLocationElement(self, locationObject, name=None):
- """Convert Location dict to a locationElement."""
- locElement = ET.Element("location")
- if name is not None:
- locElement.attrib["name"] = name
- validatedLocation = self.documentObject.newDefaultLocation()
- for axisName, axisValue in locationObject.items():
- if axisName in validatedLocation:
- # only accept values we know
- validatedLocation[axisName] = axisValue
- for dimensionName, dimensionValue in validatedLocation.items():
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = dimensionName
- if type(dimensionValue) == tuple:
- dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue[0])
- dimElement.attrib["yvalue"] = self.intOrFloat(dimensionValue[1])
- else:
- dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue)
- locElement.append(dimElement)
- return locElement, validatedLocation
-
- def intOrFloat(self, num):
- if int(num) == num:
- return "%d" % num
- return ("%f" % num).rstrip("0").rstrip(".")
-
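For illustration, whole numbers are written without a decimal part and other values have their trailing zeros stripped:

```python
from fontTools.designspaceLib import BaseDocWriter, DesignSpaceDocument

writer = BaseDocWriter("unused.designspace", DesignSpaceDocument())
print(writer.intOrFloat(400.0))  # 400
print(writer.intOrFloat(62.5))   # 62.5
print(writer.intOrFloat(0.333))  # 0.333
```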
- def _addRule(self, ruleObject):
- # conditions without a minimum or maximum are skipped; a rule element that
- # ends up empty is not added.
- ruleElement = ET.Element("rule")
- if ruleObject.name is not None:
- ruleElement.attrib["name"] = ruleObject.name
- for conditions in ruleObject.conditionSets:
- conditionsetElement = ET.Element("conditionset")
- for cond in conditions:
- if cond.get("minimum") is None and cond.get("maximum") is None:
- # neither is defined, don't add this condition
- continue
- conditionElement = ET.Element("condition")
- conditionElement.attrib["name"] = cond.get("name")
- if cond.get("minimum") is not None:
- conditionElement.attrib["minimum"] = self.intOrFloat(
- cond.get("minimum")
- )
- if cond.get("maximum") is not None:
- conditionElement.attrib["maximum"] = self.intOrFloat(
- cond.get("maximum")
- )
- conditionsetElement.append(conditionElement)
- if len(conditionsetElement):
- ruleElement.append(conditionsetElement)
- for sub in ruleObject.subs:
- subElement = ET.Element("sub")
- subElement.attrib["name"] = sub[0]
- subElement.attrib["with"] = sub[1]
- ruleElement.append(subElement)
- if len(ruleElement):
- self.root.findall(".rules")[0].append(ruleElement)
-
- def _addAxis(self, axisObject):
- axisElement = ET.Element("axis")
- axisElement.attrib["tag"] = axisObject.tag
- axisElement.attrib["name"] = axisObject.name
- self._addLabelNames(axisElement, axisObject.labelNames)
- if axisObject.map:
- for inputValue, outputValue in axisObject.map:
- mapElement = ET.Element("map")
- mapElement.attrib["input"] = self.intOrFloat(inputValue)
- mapElement.attrib["output"] = self.intOrFloat(outputValue)
- axisElement.append(mapElement)
- if axisObject.axisOrdering or axisObject.axisLabels:
- labelsElement = ET.Element("labels")
- if axisObject.axisOrdering is not None:
- labelsElement.attrib["ordering"] = str(axisObject.axisOrdering)
- for label in axisObject.axisLabels:
- self._addAxisLabel(labelsElement, label)
- axisElement.append(labelsElement)
- if hasattr(axisObject, "minimum"):
- axisElement.attrib["minimum"] = self.intOrFloat(axisObject.minimum)
- axisElement.attrib["maximum"] = self.intOrFloat(axisObject.maximum)
- elif hasattr(axisObject, "values"):
- axisElement.attrib["values"] = " ".join(
- self.intOrFloat(v) for v in axisObject.values
- )
- axisElement.attrib["default"] = self.intOrFloat(axisObject.default)
- if axisObject.hidden:
- axisElement.attrib["hidden"] = "1"
- self.root.findall(".axes")[0].append(axisElement)
-
- def _addAxisMapping(self, mappingsElement, mappingObject):
- mappingElement = ET.Element("mapping")
- for what in ("inputLocation", "outputLocation"):
- whatObject = getattr(mappingObject, what, None)
- if whatObject is None:
- continue
- whatElement = ET.Element(what[:-8])
- mappingElement.append(whatElement)
-
- for name, value in whatObject.items():
- dimensionElement = ET.Element("dimension")
- dimensionElement.attrib["name"] = name
- dimensionElement.attrib["xvalue"] = self.intOrFloat(value)
- whatElement.append(dimensionElement)
-
- mappingsElement.append(mappingElement)
-
- def _addAxisLabel(
- self, axisElement: ET.Element, label: AxisLabelDescriptor
- ) -> None:
- labelElement = ET.Element("label")
- labelElement.attrib["uservalue"] = self.intOrFloat(label.userValue)
- if label.userMinimum is not None:
- labelElement.attrib["userminimum"] = self.intOrFloat(label.userMinimum)
- if label.userMaximum is not None:
- labelElement.attrib["usermaximum"] = self.intOrFloat(label.userMaximum)
- labelElement.attrib["name"] = label.name
- if label.elidable:
- labelElement.attrib["elidable"] = "true"
- if label.olderSibling:
- labelElement.attrib["oldersibling"] = "true"
- if label.linkedUserValue is not None:
- labelElement.attrib["linkeduservalue"] = self.intOrFloat(
- label.linkedUserValue
- )
- self._addLabelNames(labelElement, label.labelNames)
- axisElement.append(labelElement)
-
- def _addLabelNames(self, parentElement, labelNames):
- for languageCode, labelName in sorted(labelNames.items()):
- languageElement = ET.Element("labelname")
- languageElement.attrib[XML_LANG] = languageCode
- languageElement.text = labelName
- parentElement.append(languageElement)
-
- def _addLocationLabel(
- self, parentElement: ET.Element, label: LocationLabelDescriptor
- ) -> None:
- labelElement = ET.Element("label")
- labelElement.attrib["name"] = label.name
- if label.elidable:
- labelElement.attrib["elidable"] = "true"
- if label.olderSibling:
- labelElement.attrib["oldersibling"] = "true"
- self._addLabelNames(labelElement, label.labelNames)
- self._addLocationElement(labelElement, userLocation=label.userLocation)
- parentElement.append(labelElement)
-
- def _addLocationElement(
- self,
- parentElement,
- *,
- designLocation: AnisotropicLocationDict = None,
- userLocation: SimpleLocationDict = None,
- ):
- locElement = ET.Element("location")
- for axis in self.documentObject.axes:
- if designLocation is not None and axis.name in designLocation:
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = axis.name
- value = designLocation[axis.name]
- if isinstance(value, tuple):
- dimElement.attrib["xvalue"] = self.intOrFloat(value[0])
- dimElement.attrib["yvalue"] = self.intOrFloat(value[1])
- else:
- dimElement.attrib["xvalue"] = self.intOrFloat(value)
- locElement.append(dimElement)
- elif userLocation is not None and axis.name in userLocation:
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = axis.name
- value = userLocation[axis.name]
- dimElement.attrib["uservalue"] = self.intOrFloat(value)
- locElement.append(dimElement)
- if len(locElement) > 0:
- parentElement.append(locElement)
-
- def _addInstance(self, instanceObject):
- instanceElement = ET.Element("instance")
- if instanceObject.name is not None:
- instanceElement.attrib["name"] = instanceObject.name
- if instanceObject.locationLabel is not None:
- instanceElement.attrib["location"] = instanceObject.locationLabel
- if instanceObject.familyName is not None:
- instanceElement.attrib["familyname"] = instanceObject.familyName
- if instanceObject.styleName is not None:
- instanceElement.attrib["stylename"] = instanceObject.styleName
- # add localisations
- if instanceObject.localisedStyleName:
- languageCodes = list(instanceObject.localisedStyleName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedStyleNameElement = ET.Element("stylename")
- localisedStyleNameElement.attrib[XML_LANG] = code
- localisedStyleNameElement.text = instanceObject.getStyleName(code)
- instanceElement.append(localisedStyleNameElement)
- if instanceObject.localisedFamilyName:
- languageCodes = list(instanceObject.localisedFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedFamilyNameElement = ET.Element("familyname")
- localisedFamilyNameElement.attrib[XML_LANG] = code
- localisedFamilyNameElement.text = instanceObject.getFamilyName(code)
- instanceElement.append(localisedFamilyNameElement)
- if instanceObject.localisedStyleMapStyleName:
- languageCodes = list(instanceObject.localisedStyleMapStyleName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue
- localisedStyleMapStyleNameElement = ET.Element("stylemapstylename")
- localisedStyleMapStyleNameElement.attrib[XML_LANG] = code
- localisedStyleMapStyleNameElement.text = (
- instanceObject.getStyleMapStyleName(code)
- )
- instanceElement.append(localisedStyleMapStyleNameElement)
- if instanceObject.localisedStyleMapFamilyName:
- languageCodes = list(instanceObject.localisedStyleMapFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue
- localisedStyleMapFamilyNameElement = ET.Element("stylemapfamilyname")
- localisedStyleMapFamilyNameElement.attrib[XML_LANG] = code
- localisedStyleMapFamilyNameElement.text = (
- instanceObject.getStyleMapFamilyName(code)
- )
- instanceElement.append(localisedStyleMapFamilyNameElement)
-
- if self.effectiveFormatTuple >= (5, 0):
- if instanceObject.locationLabel is None:
- self._addLocationElement(
- instanceElement,
- designLocation=instanceObject.designLocation,
- userLocation=instanceObject.userLocation,
- )
- else:
- # Pre-version 5.0 code was validating and filling in the location
- # dict while writing it out, as preserved below.
- if instanceObject.location is not None:
- locationElement, instanceObject.location = self._makeLocationElement(
- instanceObject.location
- )
- instanceElement.append(locationElement)
- if instanceObject.filename is not None:
- instanceElement.attrib["filename"] = instanceObject.filename
- if instanceObject.postScriptFontName is not None:
- instanceElement.attrib[
- "postscriptfontname"
- ] = instanceObject.postScriptFontName
- if instanceObject.styleMapFamilyName is not None:
- instanceElement.attrib[
- "stylemapfamilyname"
- ] = instanceObject.styleMapFamilyName
- if instanceObject.styleMapStyleName is not None:
- instanceElement.attrib[
- "stylemapstylename"
- ] = instanceObject.styleMapStyleName
- if self.effectiveFormatTuple < (5, 0):
- # Deprecated members as of version 5.0
- if instanceObject.glyphs:
- if instanceElement.findall(".glyphs") == []:
- glyphsElement = ET.Element("glyphs")
- instanceElement.append(glyphsElement)
- glyphsElement = instanceElement.findall(".glyphs")[0]
- for glyphName, data in sorted(instanceObject.glyphs.items()):
- glyphElement = self._writeGlyphElement(
- instanceElement, instanceObject, glyphName, data
- )
- glyphsElement.append(glyphElement)
- if instanceObject.kerning:
- kerningElement = ET.Element("kerning")
- instanceElement.append(kerningElement)
- if instanceObject.info:
- infoElement = ET.Element("info")
- instanceElement.append(infoElement)
- self._addLib(instanceElement, instanceObject.lib, 4)
- self.root.findall(".instances")[0].append(instanceElement)
-
- def _addSource(self, sourceObject):
- sourceElement = ET.Element("source")
- if sourceObject.filename is not None:
- sourceElement.attrib["filename"] = sourceObject.filename
- if sourceObject.name is not None:
- if sourceObject.name.find("temp_master") != 0:
- # do not save temporary source names
- sourceElement.attrib["name"] = sourceObject.name
- if sourceObject.familyName is not None:
- sourceElement.attrib["familyname"] = sourceObject.familyName
- if sourceObject.styleName is not None:
- sourceElement.attrib["stylename"] = sourceObject.styleName
- if sourceObject.layerName is not None:
- sourceElement.attrib["layer"] = sourceObject.layerName
- if sourceObject.localisedFamilyName:
- languageCodes = list(sourceObject.localisedFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedFamilyNameElement = ET.Element("familyname")
- localisedFamilyNameElement.attrib[XML_LANG] = code
- localisedFamilyNameElement.text = sourceObject.getFamilyName(code)
- sourceElement.append(localisedFamilyNameElement)
- if sourceObject.copyLib:
- libElement = ET.Element("lib")
- libElement.attrib["copy"] = "1"
- sourceElement.append(libElement)
- if sourceObject.copyGroups:
- groupsElement = ET.Element("groups")
- groupsElement.attrib["copy"] = "1"
- sourceElement.append(groupsElement)
- if sourceObject.copyFeatures:
- featuresElement = ET.Element("features")
- featuresElement.attrib["copy"] = "1"
- sourceElement.append(featuresElement)
- if sourceObject.copyInfo or sourceObject.muteInfo:
- infoElement = ET.Element("info")
- if sourceObject.copyInfo:
- infoElement.attrib["copy"] = "1"
- if sourceObject.muteInfo:
- infoElement.attrib["mute"] = "1"
- sourceElement.append(infoElement)
- if sourceObject.muteKerning:
- kerningElement = ET.Element("kerning")
- kerningElement.attrib["mute"] = "1"
- sourceElement.append(kerningElement)
- if sourceObject.mutedGlyphNames:
- for name in sourceObject.mutedGlyphNames:
- glyphElement = ET.Element("glyph")
- glyphElement.attrib["name"] = name
- glyphElement.attrib["mute"] = "1"
- sourceElement.append(glyphElement)
- if self.effectiveFormatTuple >= (5, 0):
- self._addLocationElement(
- sourceElement, designLocation=sourceObject.location
- )
- else:
- # Pre-version 5.0 code was validating and filling in the location
- # dict while writing it out, as preserved below.
- locationElement, sourceObject.location = self._makeLocationElement(
- sourceObject.location
- )
- sourceElement.append(locationElement)
- self.root.findall(".sources")[0].append(sourceElement)
-
- def _addVariableFont(
- self, parentElement: ET.Element, vf: VariableFontDescriptor
- ) -> None:
- vfElement = ET.Element("variable-font")
- vfElement.attrib["name"] = vf.name
- if vf.filename is not None:
- vfElement.attrib["filename"] = vf.filename
- if vf.axisSubsets:
- subsetsElement = ET.Element("axis-subsets")
- for subset in vf.axisSubsets:
- subsetElement = ET.Element("axis-subset")
- subsetElement.attrib["name"] = subset.name
- # Mypy doesn't support narrowing union types via hasattr()
- # https://mypy.readthedocs.io/en/stable/type_narrowing.html
- # TODO(Python 3.10): use TypeGuard
- if hasattr(subset, "userMinimum"):
- subset = cast(RangeAxisSubsetDescriptor, subset)
- if subset.userMinimum != -math.inf:
- subsetElement.attrib["userminimum"] = self.intOrFloat(
- subset.userMinimum
- )
- if subset.userMaximum != math.inf:
- subsetElement.attrib["usermaximum"] = self.intOrFloat(
- subset.userMaximum
- )
- if subset.userDefault is not None:
- subsetElement.attrib["userdefault"] = self.intOrFloat(
- subset.userDefault
- )
- elif hasattr(subset, "userValue"):
- subset = cast(ValueAxisSubsetDescriptor, subset)
- subsetElement.attrib["uservalue"] = self.intOrFloat(
- subset.userValue
- )
- subsetsElement.append(subsetElement)
- vfElement.append(subsetsElement)
- self._addLib(vfElement, vf.lib, 4)
- parentElement.append(vfElement)
-
- def _addLib(self, parentElement: ET.Element, data: Any, indent_level: int) -> None:
- if not data:
- return
- libElement = ET.Element("lib")
- libElement.append(plistlib.totree(data, indent_level=indent_level))
- parentElement.append(libElement)
-
- def _writeGlyphElement(self, instanceElement, instanceObject, glyphName, data):
- glyphElement = ET.Element("glyph")
- if data.get("mute"):
- glyphElement.attrib["mute"] = "1"
- if data.get("unicodes") is not None:
- glyphElement.attrib["unicode"] = " ".join(
- [hex(u) for u in data.get("unicodes")]
- )
- if data.get("instanceLocation") is not None:
- locationElement, data["instanceLocation"] = self._makeLocationElement(
- data.get("instanceLocation")
- )
- glyphElement.append(locationElement)
- if glyphName is not None:
- glyphElement.attrib["name"] = glyphName
- if data.get("note") is not None:
- noteElement = ET.Element("note")
- noteElement.text = data.get("note")
- glyphElement.append(noteElement)
- if data.get("masters") is not None:
- mastersElement = ET.Element("masters")
- for m in data.get("masters"):
- masterElement = ET.Element("master")
- if m.get("glyphName") is not None:
- masterElement.attrib["glyphname"] = m.get("glyphName")
- if m.get("font") is not None:
- masterElement.attrib["source"] = m.get("font")
- if m.get("location") is not None:
- locationElement, m["location"] = self._makeLocationElement(
- m.get("location")
- )
- masterElement.append(locationElement)
- mastersElement.append(masterElement)
- glyphElement.append(mastersElement)
- return glyphElement
-
-
-class BaseDocReader(LogMixin):
- axisDescriptorClass = AxisDescriptor
- discreteAxisDescriptorClass = DiscreteAxisDescriptor
- axisLabelDescriptorClass = AxisLabelDescriptor
- axisMappingDescriptorClass = AxisMappingDescriptor
- locationLabelDescriptorClass = LocationLabelDescriptor
- ruleDescriptorClass = RuleDescriptor
- sourceDescriptorClass = SourceDescriptor
- variableFontsDescriptorClass = VariableFontDescriptor
- valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor
- rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor
- instanceDescriptorClass = InstanceDescriptor
-
- def __init__(self, documentPath, documentObject):
- self.path = documentPath
- self.documentObject = documentObject
- tree = ET.parse(self.path)
- self.root = tree.getroot()
- self.documentObject.formatVersion = self.root.attrib.get("format", "3.0")
- self._axes = []
- self.rules = []
- self.sources = []
- self.instances = []
- self.axisDefaults = {}
- self._strictAxisNames = True
-
- @classmethod
- def fromstring(cls, string, documentObject):
- f = BytesIO(tobytes(string, encoding="utf-8"))
- self = cls(f, documentObject)
- self.path = None
- return self
-
- def read(self):
- self.readAxes()
- self.readLabels()
- self.readRules()
- self.readVariableFonts()
- self.readSources()
- self.readInstances()
- self.readLib()
-
- def readRules(self):
- # we also need to read any conditions that are outside of a condition set.
- rules = []
- rulesElement = self.root.find(".rules")
- if rulesElement is not None:
- processingValue = rulesElement.attrib.get("processing", "first")
- if processingValue not in {"first", "last"}:
- raise DesignSpaceDocumentError(
- " processing attribute value is not valid: %r, "
- "expected 'first' or 'last'" % processingValue
- )
- self.documentObject.rulesProcessingLast = processingValue == "last"
- for ruleElement in self.root.findall(".rules/rule"):
- ruleObject = self.ruleDescriptorClass()
- ruleName = ruleObject.name = ruleElement.attrib.get("name")
- # read any stray conditions outside a condition set
- externalConditions = self._readConditionElements(
- ruleElement,
- ruleName,
- )
- if externalConditions:
- ruleObject.conditionSets.append(externalConditions)
- self.log.info(
- "Found stray rule conditions outside a conditionset. "
- "Wrapped them in a new conditionset."
- )
- # read the conditionsets
- for conditionSetElement in ruleElement.findall(".conditionset"):
- conditionSet = self._readConditionElements(
- conditionSetElement,
- ruleName,
- )
- if conditionSet is not None:
- ruleObject.conditionSets.append(conditionSet)
- for subElement in ruleElement.findall(".sub"):
- a = subElement.attrib["name"]
- b = subElement.attrib["with"]
- ruleObject.subs.append((a, b))
- rules.append(ruleObject)
- self.documentObject.rules = rules
-
- def _readConditionElements(self, parentElement, ruleName=None):
- cds = []
- for conditionElement in parentElement.findall(".condition"):
- cd = {}
- cdMin = conditionElement.attrib.get("minimum")
- if cdMin is not None:
- cd["minimum"] = float(cdMin)
- else:
- # will allow these to be None, assume axis.minimum
- cd["minimum"] = None
- cdMax = conditionElement.attrib.get("maximum")
- if cdMax is not None:
- cd["maximum"] = float(cdMax)
- else:
- # will allow these to be None, assume axis.maximum
- cd["maximum"] = None
- cd["name"] = conditionElement.attrib.get("name")
- # a condition must define at least a minimum or a maximum
- if cd.get("minimum") is None and cd.get("maximum") is None:
- raise DesignSpaceDocumentError(
- "condition missing required minimum or maximum in rule"
- + (" '%s'" % ruleName if ruleName is not None else "")
- )
- cds.append(cd)
- return cds
-
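A sketch of the condition dictionaries this method produces (and that rules carry in their ``conditionSets``): each condition names an axis and gives a minimum and/or maximum, with the missing bound left as ``None``. The names and numbers are illustrative.

```python
from fontTools.designspaceLib import RuleDescriptor

rule = RuleDescriptor()
rule.name = "named.rule.1"
rule.conditionSets.append(
    [
        {"name": "Weight", "minimum": 500, "maximum": None},  # open-ended above 500
        {"name": "Width", "minimum": 50, "maximum": 100},
    ]
)
rule.subs.append(("dollar", "dollar.alt"))  # substitute "dollar" in that region
```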
- def readAxes(self):
- # read the axes elements, including the warp map.
- axesElement = self.root.find(".axes")
- if axesElement is not None and "elidedfallbackname" in axesElement.attrib:
- self.documentObject.elidedFallbackName = axesElement.attrib[
- "elidedfallbackname"
- ]
- axisElements = self.root.findall(".axes/axis")
- if not axisElements:
- return
- for axisElement in axisElements:
- if (
- self.documentObject.formatTuple >= (5, 0)
- and "values" in axisElement.attrib
- ):
- axisObject = self.discreteAxisDescriptorClass()
- axisObject.values = [
- float(s) for s in axisElement.attrib["values"].split(" ")
- ]
- else:
- axisObject = self.axisDescriptorClass()
- axisObject.minimum = float(axisElement.attrib.get("minimum"))
- axisObject.maximum = float(axisElement.attrib.get("maximum"))
- axisObject.default = float(axisElement.attrib.get("default"))
- axisObject.name = axisElement.attrib.get("name")
- if axisElement.attrib.get("hidden", False):
- axisObject.hidden = True
- axisObject.tag = axisElement.attrib.get("tag")
- for mapElement in axisElement.findall("map"):
- a = float(mapElement.attrib["input"])
- b = float(mapElement.attrib["output"])
- axisObject.map.append((a, b))
- for labelNameElement in axisElement.findall("labelname"):
- # Note: elementtree reads the "xml:lang" attribute name as
- # '{http://www.w3.org/XML/1998/namespace}lang'
- for key, lang in labelNameElement.items():
- if key == XML_LANG:
- axisObject.labelNames[lang] = tostr(labelNameElement.text)
- labelElement = axisElement.find(".labels")
- if labelElement is not None:
- if "ordering" in labelElement.attrib:
- axisObject.axisOrdering = int(labelElement.attrib["ordering"])
- for label in labelElement.findall(".label"):
- axisObject.axisLabels.append(self.readAxisLabel(label))
- self.documentObject.axes.append(axisObject)
- self.axisDefaults[axisObject.name] = axisObject.default
-
- mappingsElement = self.root.find(".axes/mappings")
- self.documentObject.axisMappings = []
- if mappingsElement is not None:
- for mappingElement in mappingsElement.findall("mapping"):
- inputElement = mappingElement.find("input")
- outputElement = mappingElement.find("output")
- inputLoc = {}
- outputLoc = {}
- for dimElement in inputElement.findall(".dimension"):
- name = dimElement.attrib["name"]
- value = float(dimElement.attrib["xvalue"])
- inputLoc[name] = value
- for dimElement in outputElement.findall(".dimension"):
- name = dimElement.attrib["name"]
- value = float(dimElement.attrib["xvalue"])
- outputLoc[name] = value
- axisMappingObject = self.axisMappingDescriptorClass(
- inputLocation=inputLoc, outputLocation=outputLoc
- )
- self.documentObject.axisMappings.append(axisMappingObject)
-
- def readAxisLabel(self, element: ET.Element):
- xml_attrs = {
- "userminimum",
- "uservalue",
- "usermaximum",
- "name",
- "elidable",
- "oldersibling",
- "linkeduservalue",
- }
- unknown_attrs = set(element.attrib) - xml_attrs
- if unknown_attrs:
- raise DesignSpaceDocumentError(
- f"label element contains unknown attributes: {', '.join(unknown_attrs)}"
- )
-
- name = element.get("name")
- if name is None:
- raise DesignSpaceDocumentError("label element must have a name attribute.")
- valueStr = element.get("uservalue")
- if valueStr is None:
- raise DesignSpaceDocumentError(
- "label element must have a uservalue attribute."
- )
- value = float(valueStr)
- minimumStr = element.get("userminimum")
- minimum = float(minimumStr) if minimumStr is not None else None
- maximumStr = element.get("usermaximum")
- maximum = float(maximumStr) if maximumStr is not None else None
- linkedValueStr = element.get("linkeduservalue")
- linkedValue = float(linkedValueStr) if linkedValueStr is not None else None
- elidable = True if element.get("elidable") == "true" else False
- olderSibling = True if element.get("oldersibling") == "true" else False
- labelNames = {
- lang: label_name.text or ""
- for label_name in element.findall("labelname")
- for attr, lang in label_name.items()
- if attr == XML_LANG
- # Note: elementtree reads the "xml:lang" attribute name as
- # '{http://www.w3.org/XML/1998/namespace}lang'
- }
- return self.axisLabelDescriptorClass(
- name=name,
- userValue=value,
- userMinimum=minimum,
- userMaximum=maximum,
- elidable=elidable,
- olderSibling=olderSibling,
- linkedUserValue=linkedValue,
- labelNames=labelNames,
- )
-
- def readLabels(self):
- if self.documentObject.formatTuple < (5, 0):
- return
-
- xml_attrs = {"name", "elidable", "oldersibling"}
- for labelElement in self.root.findall(".labels/label"):
- unknown_attrs = set(labelElement.attrib) - xml_attrs
- if unknown_attrs:
- raise DesignSpaceDocumentError(
- f"Label element contains unknown attributes: {', '.join(unknown_attrs)}"
- )
-
- name = labelElement.get("name")
- if name is None:
- raise DesignSpaceDocumentError(
- "label element must have a name attribute."
- )
- designLocation, userLocation = self.locationFromElement(labelElement)
- if designLocation:
- raise DesignSpaceDocumentError(
- f'