diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Beirut Nightmares Ghada Samman Pdf To Jpg.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Beirut Nightmares Ghada Samman Pdf To Jpg.md
deleted file mode 100644
index f739151a58e4dd5be4b3f15fd186e4922c9ce112..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Beirut Nightmares Ghada Samman Pdf To Jpg.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
Beirut Nightmares: A Novel by Ghada Samman
-
Beirut Nightmares is a novel by Syrian writer Ghada Samman, who lived in Beirut during the Lebanese Civil War. The novel was first published in Arabic in 1976 and later translated into English by Nancy Roberts in 1997. It is considered one of the most important works of Arabic literature that deals with the war and its effects on the people of Beirut.
-
The novel consists of 151 episodes that are labeled as "Nightmare 1" and so on. The episodes are not chronological, but rather follow the stream of consciousness of the narrator, a woman who is trapped in her apartment for two weeks by street battles and sniper fire. The narrator writes a series of vignettes that depict the horrors of war, as well as her own memories, dreams, fantasies, and fears. She also interacts with her neighbors, who include an old man and his son, and their male servant. The narrator's stories are sometimes realistic, sometimes surreal, sometimes humorous, and sometimes tragic. They reflect the diverse and complex realities of Beirut during the war, as well as the psychological and emotional impact of violence and isolation on the narrator and her fellow citizens.
Beirut Nightmares is a novel that challenges the conventional boundaries between reality and fiction, between waking and sleeping, between sanity and madness. It is a novel that explores the themes of identity, survival, resistance, and hope in the face of war and destruction. It is a novel that gives voice to the experiences of women in war-torn Beirut, who are often marginalized or silenced by patriarchal and political forces. It is a novel that offers a vivid and powerful portrait of a city and a people in crisis.
-
If you are interested in reading Beirut Nightmares by Ghada Samman, you can find it in PDF format here[^1^]. If you prefer to read it as a JPG image, you can convert it online using this tool[^2^].
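If you would rather do the conversion locally instead of with an online tool, here is a minimal sketch, assuming the third-party pdf2image Python package and its Poppler dependency are installed; the file name is a placeholder, not a real download:

```python
from pdf2image import convert_from_path

# Render each page of the PDF to an image (requires Poppler on the system).
pages = convert_from_path("beirut_nightmares.pdf", dpi=200)

# Save every page as a separate JPG file.
for number, page in enumerate(pages, start=1):
    page.save(f"beirut_nightmares_page_{number:03d}.jpg", "JPEG")
```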
-
-
Beirut Nightmares is not only a novel, but also a testimony of the history and culture of Beirut. Ghada Samman draws on her own experiences as a journalist, a feminist, and a witness of the war to create a rich and authentic representation of the city and its people. She also incorporates elements of Arabic folklore, mythology, and literature to enrich her narrative and to challenge the stereotypes and prejudices that often surround the Arab world. Beirut Nightmares is a novel that celebrates the diversity, creativity, and resilience of Beirut and its inhabitants, who refuse to succumb to despair and violence.
-
Beirut Nightmares is also a novel that invites the reader to question their own assumptions and perspectives on war and its consequences. By blurring the lines between reality and fiction, Ghada Samman challenges the reader to reconsider their notions of truth, justice, and morality. By shifting between different points of view, she challenges the reader to empathize with different characters and situations. By using humor, irony, and satire, she challenges the reader to critique the absurdity and hypocrisy of war and its perpetrators. Beirut Nightmares is a novel that provokes the reader to think critically and creatively about the complex and multifaceted issues of war and peace.
-
Beirut Nightmares is a novel that deserves to be read by anyone who is interested in learning more about the Lebanese Civil War and its impact on the people of Beirut. It is also a novel that deserves to be read by anyone who appreciates innovative and engaging literature that explores the human condition in times of crisis. Beirut Nightmares is a novel that will make you laugh, cry, wonder, and reflect. It is a novel that will stay with you long after you finish reading it.
- 7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CA ERwin Data Modeler Serial Key.md b/spaces/1gistliPinn/ChatGPT4/Examples/CA ERwin Data Modeler Serial Key.md
deleted file mode 100644
index 0ff0fd0d623b1dda55a3345f65fdb52088f41939..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/CA ERwin Data Modeler Serial Key.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
CA ERwin integrates product information, material information, and the production system into one ERP system and provides a unified database for it. In addition, CA ERwin has strong OEM development capabilities and can be used in fields such as mobile phones, computers, tablets, digital cameras, consumer electronics, and lighting equipment. Its technical support team is always ready to assist OEM developers.
-
CA ERwin is a complete ERP solution and a powerful enterprise accounting solution, and it is the first ERP solution developed by CA. ERP stands for enterprise resource planning: it integrates various business information and processes, including finance, manufacturing, human resources, sales, purchasing, production, and inventory, into one coordinated system.
If you want to integrate ERP, we recommend CA ERwin: it can save you a great deal of money and development time.
-
CA ERwin Data Modeler with a serial key is database software that helps you create a new database with tables, fields, primary keys, and other features. The full version with a serial key is free for all users.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CADlink EngraveLab Expert 7.1 Rev.1 Build 8.md b/spaces/1gistliPinn/ChatGPT4/Examples/CADlink EngraveLab Expert 7.1 Rev.1 Build 8.md
deleted file mode 100644
index 4442a8cead20e834f583686b54a12a8190e15fec..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/CADlink EngraveLab Expert 7.1 Rev.1 Build 8.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/1line/AutoGPT/tests/__init__.py b/spaces/1line/AutoGPT/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Create Amazing Artworks with AI Art Generator MOD APK (Premium Unlocked) Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Create Amazing Artworks with AI Art Generator MOD APK (Premium Unlocked) Download.md
deleted file mode 100644
index 1e09a1ef8b703573e8fbc2bac598d4f774bf2472..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Create Amazing Artworks with AI Art Generator MOD APK (Premium Unlocked) Download.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Download AI Art Generator Mod APK Premium Unlocked
-
Do you want to create amazing art with the help of artificial intelligence? Do you want to unleash your creativity and express yourself in different styles? Do you want to enjoy all the features of a powerful app without paying anything? If you answered yes to any of these questions, then you should download AI Art Generator mod apk premium unlocked. In this article, we will tell you what is AI Art Generator, why you should download it, and how to do it. We will also show you some examples of the stunning art you can make with this app.
-
AI Art Generator is an app that lets you create amazing art with the help of artificial intelligence. You can choose from different types of art, such as anime, digital paintings, and photorealistic art. You can also customize your art by adjusting the parameters, such as style, color, and resolution. You can save your art to your device or share it with your friends on social media.
-
Features of AI Art Generator
-
AI Art Generator has many features that make it a great app for art lovers. Some of these features are:
-
-
It uses Stable Diffusion, a state-of-the-art AI technology that can generate high-quality images in seconds (a code sketch of what a Stable Diffusion call looks like follows this list).
-
It has a simple and intuitive interface that makes it easy to use.
-
It has a large library of styles and genres that you can choose from.
-
It allows you to edit your art by changing the brightness, contrast, saturation, and other settings.
-
It supports different resolutions and formats, such as JPG, PNG, and GIF.
-
-
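This is not the app's own code; purely to illustrate the Stable Diffusion technology named in the list above, here is a minimal sketch using the open-source Hugging Face diffusers library, assuming PyTorch, a GPU, and a publicly available example checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (example model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # move the pipeline to the GPU

# Generate one image from a text prompt and save it.
image = pipe("an anime-style portrait, digital painting").images[0]
image.save("generated.png")
```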
How to use AI Art Generator
-
Using AI Art Generator is very simple. Here are the steps you need to follow:
-
-
Open the app and select the type of art you want to make.
-
Choose a style from the available options or upload your own image as a reference.
-
Adjust the parameters as you like and click the Create button.
-
Wait for a few seconds while the app generates your art.
-
Save or share your art as you wish.
-
-
Why download AI Art Generator mod apk premium unlocked?
-
If you are wondering why you should download AI Art Generator mod apk premium unlocked instead of the original version, here are some reasons:
-
-
Benefits of mod apk premium unlocked
-
The mod apk premium unlocked version of AI Art Generator has some benefits that the original version does not have. Some of these benefits are:
-
-
You can access all the features and styles without paying anything.
-
You can remove the watermark and ads from your art.
-
You can enjoy faster and smoother performance.
-
You can get unlimited updates and support.
-
-
How to download and install mod apk premium unlocked
-
To download and install AI Art Generator mod apk premium unlocked, you need to follow these steps:
-
-
Click on this link to download the mod apk file.
-
Allow unknown sources on your device settings if prompted.
-
Locate and install the mod apk file on your device.
-
Open the app and enjoy creating amazing art with AI.
-
-
Examples of AI art generated by the app
-
To give you an idea of what kind of art you can create with AI Art Generator, here are some examples:
-
Anime art
-
If you are a fan of anime, you can create your own characters or scenes with AI Art Generator. You can choose from different anime styles, such as shonen, shojo, or seinen. You can also mix and match different elements, such as hair, eyes, clothes, and accessories. Here is an example of an anime character generated by the app:
-
-
Isn't she cute? You can create your own anime art with AI Art Generator mod apk premium unlocked.
-
Digital paintings
-
If you prefer a more realistic style, you can create digital paintings with AI Art Generator. You can choose from different genres, such as landscapes, portraits, or abstract. You can also use your own photos as references or inspiration. Here is an example of a digital painting generated by the app:
-
-
Wow, that looks like a real painting! You can create your own digital paintings with AI Art Generator mod apk premium unlocked.
-
Photorealistic art
-
If you want to create art that looks like a photograph, you can use the photorealistic mode of AI Art Generator. You can select from different categories, such as animals, flowers, or food. You can also adjust the level of detail and realism. Here is an example of a photorealistic art generated by the app:
-
-
That looks delicious! You can create your own photorealistic art with AI Art Generator mod apk premium unlocked.
-
Conclusion
-
AI Art Generator is an amazing app that lets you create stunning art with the help of artificial intelligence. You can choose from different types of art, such as anime, digital paintings, and photorealistic art. You can also customize your art by adjusting the parameters, such as style, color, and resolution. You can save your art to your device or share it with your friends on social media.
-
If you want to enjoy all the features and benefits of this app without paying anything, you should download AI Art Generator mod apk premium unlocked. This version will give you access to all the styles and genres, remove the watermark and ads, improve the performance and speed, and provide unlimited updates and support.
-
To download AI Art Generator mod apk premium unlocked, you just need to follow these simple steps:
-
-
Click on this link to download the mod apk file.
-
Allow unknown sources on your device settings if prompted.
-
Locate and install the mod apk file on your device.
-
Open the app and enjoy creating amazing art with AI.
-
-
So what are you waiting for? Download AI Art Generator mod apk premium unlocked today and unleash your creativity!
-
FAQs
-
Here are some frequently asked questions about AI Art Generator mod apk premium unlocked:
-
-
Is AI Art Generator mod apk premium unlocked safe to use?
-
Yes, it is safe to use. The mod apk file has been scanned and tested by our team and it does not contain any viruses or malware. However, you should always download it from a trusted source like ours.
-
Is AI Art Generator mod apk premium unlocked legal to use?
-
Yes, it is legal to use. The mod apk file is not a hack or a cheat. It is just a modified version of the original app that gives you some extra features and benefits. However, you should use it at your own risk and discretion.
-
Does AI Art Generator mod apk premium unlocked require root access?
-
No, it does not require root access. You can install and use it on any Android device without rooting it.
-
How often does AI Art Generator mod apk premium unlocked get updated?
-
We update AI Art Generator mod apk premium unlocked regularly to keep up with the latest features and improvements of the original app. You can check our website for the latest version or enable automatic updates on your device settings.
-
Can I request a new style or genre for AI Art Generator mod apk premium unlocked?
-
Yes, you can request a new style or genre for AI Art Generator mod apk premium unlocked. You can contact us through our email or social media accounts and let us know what kind of art you want to see in the app. We will try our best to fulfill your request.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chess APK Unlocked for Android - Enjoy Offline and Multiplayer Modes.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chess APK Unlocked for Android - Enjoy Offline and Multiplayer Modes.md
deleted file mode 100644
index bcd15cab2f85ed52a7ad7a2879495a44f4bb9201..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chess APK Unlocked for Android - Enjoy Offline and Multiplayer Modes.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
Chess APK Unlocked: How to Play Chess Online with Friends and Improve Your Skills
-
Introduction
- Chess is one of the oldest and most popular board games in the world. It is a game of logic, strategy, and skill that can challenge your mind and entertain you for hours. But what if you want to play chess online with your friends or other players from around the world? And what if you want to improve your chess skills and learn from the best? That's where chess apk unlocked comes in. Chess apk unlocked is a term that refers to a modified version of a chess app that allows you to access all the features and functions without paying any fees or subscriptions. With chess apk unlocked, you can play unlimited games online or offline, join tournaments, watch videos, solve puzzles, customize your board, chat with other players, and much more. Playing chess has many benefits for your brain and mental health. It can help you develop your memory, concentration, creativity, problem-solving, planning, self-awareness, and emotional intelligence. It can also reduce stress, anxiety, depression, and the risk of dementia. Playing chess is not only fun but also good for you.
Chess APK Unlocked: What Is It and How to Get It
An APK file is a file format used to install applications on Android devices, similar to an EXE file for Windows or a DMG file for Mac. You can download APK files from various sources on the internet, such as websites, forums, or file-sharing platforms. However, you need to be careful and only download APK files from trusted and reputable sources, as some APK files may contain malware or viruses that can harm your device or steal your data.

An unlocked chess APK file is a modified version of a chess app that has been hacked or cracked to remove any restrictions or limitations that the original app may have. For example, some chess apps may require you to pay a fee or subscribe to access certain features or functions, such as online play, premium content, or advanced settings. An unlocked chess APK file bypasses these requirements and lets you enjoy all the features and functions for free.

There are many advantages of using an unlocked chess APK file over a regular chess app:

- You can play unlimited games online or offline without any ads or interruptions.
- You can join tournaments and compete with other players from around the world.
- You can watch videos and learn from grandmasters and experts.
- You can solve puzzles and improve your tactics and strategy.
- You can customize your board and pieces according to your preference.
- You can chat with your opponents and send emojis and stickers.
- You can analyze your games and track your progress and rating.
- You can save your games and share them with others.

Some examples of chess APK unlocked files are:

- Chess.com Mod APK: a modified version of the Chess.com app, one of the most popular chess apps in the world with over 50 million users. It offers online play, puzzles, lessons, videos, articles, and more. The mod unlocks all the premium features for free, such as unlimited puzzles, lessons, videos, and articles, and removes all the ads and pop-ups.
- Lichess Mod APK: a modified version of the Lichess app, another popular chess app that is free and open source, with over 10 million users. It offers online play, tournaments, puzzles, analysis, and more. The mod unlocks all the features for free, such as unlimited puzzles, analysis, and tournaments, and removes all the ads and pop-ups.
- Chess Tactics Pro Mod APK: a modified version of the Chess Tactics Pro app, which focuses on improving your tactical skills and has over 1 million users. It offers puzzles, ratings, themes, and more. The mod unlocks all the features for free, such as unlimited puzzles, themes, and ratings, and removes all the ads and pop-ups.

To get an unlocked chess APK file, follow these steps:

- Find a reliable and reputable source that offers the unlocked chess APK file you want. You can use Google or any other search engine to find such sources.
- Download the unlocked chess APK file to your device. Make sure you have enough storage space and a stable internet connection.
- Enable the installation of unknown sources on your device: go to Settings > Security > Unknown Sources and toggle it on. This allows you to install apps from sources other than the Google Play Store.
- Locate the downloaded APK file on your device using a file manager app or any other app that can access your files.
- Tap on the APK file and follow the instructions to install it.
- Enjoy playing chess online with friends and improving your skills with an unlocked chess APK file.
Chess APK Unlocked: How to Play Chess Online with Friends
- Playing chess online with friends is one of the best ways to have fun and socialize while improving your chess skills. With an unlocked chess apk file, you can play chess online with friends anytime and anywhere without any limitations or restrictions. Here is how you can do it: - location, your age, your gender, your language, etc. You can also create your own community and invite your friends to join it. - Invite your friends and challenge them to a game. To play chess online with friends, you need to invite them to a game and challenge them to a match. You can do this by using the app's chat function or by sending them a link to the game. You can also search for your friends by using their username or email address. Once you have invited your friends, you can choose the game settings, such as the time control, the board color, the rating range, etc. You can also choose to play a casual game or a rated game. - Chat with your opponents and send emojis. Playing chess online with friends is not only about moving pieces on the board, but also about having fun and socializing with them. You can chat with your opponents during the game and send them messages, emojis, stickers, gifs, etc. You can also use voice chat or video chat to communicate with them. You can also mute or block any players that you don't want to talk to or play with.
Chess APK Unlocked: How to Improve Your Chess Skills
Playing chess online with friends is not only fun but also educational. You can improve your chess skills and learn from your mistakes and successes. With an unlocked chess APK file, you can access different modes and levels of difficulty, learn from tutorials, videos, and puzzles, and analyze your games and track your progress. Here is how you can do it:

- Access different modes and levels of difficulty. To improve, you need to challenge yourself and play against opponents that are stronger than you or have different styles of play. You can play against the computer or an AI opponent with different personalities and skill levels, or against other players from around the world with different ratings and rankings. You can also play different time controls and formats, such as blitz, bullet, rapid, and classical.
- Learn from tutorials, videos, and puzzles. To improve, you need to learn from the best and practice your tactics and strategy. You can watch videos that explain the rules, principles, concepts, openings, middlegames, and endgames of chess; solve puzzles that test your calculation, visualization, intuition, and creativity; and take lessons that teach you how to improve in specific areas of the game.
- Analyze your games and track your progress. You can use the app's analysis function to review your moves and see where you made mistakes or missed opportunities, including the evaluation, best moves, variations, and comments for each position. You can also use the statistics function to see your rating, performance, accuracy, and win/loss ratio, and compare your results with other players to see how you rank.
Chess APK Unlocked: Tips and Tricks
Playing chess online with friends is not only fun and educational but also customizable and flexible. You can adjust the app's settings and features according to your preference and convenience. With an unlocked chess APK file, you can customize your board and pieces, use hints and undo moves, and save your games and share them with others. Here are some tips and tricks:

- Customize your board and pieces. To make your chess experience more enjoyable and personal, you can choose from different themes, colors, styles, and sounds for the board and pieces, change their size, orientation, and layout, and enable or disable the coordinates, notation, and arrows.
- Use hints and undo moves. To make your chess experience easier and more comfortable, you can use hints and undo moves when playing against the computer or an AI opponent. Hints suggest the best moves or let you check whether your move is sound, and undo lets you take back a mistake. Use these features sparingly and only for learning purposes, as they may affect your rating and performance.
- Save your games and share them with others. You can save your games in different formats, such as PGN, FEN, or PNG, export or import them to or from other apps or devices, and share them by sending a link or a file via email, social media, or messaging apps (a short sketch of the PGN and FEN formats follows this list).
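The article mentions PGN and FEN without showing what they look like. Here is a minimal sketch using the open-source python-chess library (a third-party package, not part of any of the apps mentioned) that plays a few moves, then prints the position as FEN and the game as PGN:

```python
import chess
import chess.pgn

# Play a few moves on a fresh board using standard algebraic notation.
board = chess.Board()
for san_move in ["e4", "e5", "Nf3", "Nc6"]:
    board.push_san(san_move)

# FEN: a single-line snapshot of the current position.
print(board.fen())

# PGN: a portable record of the whole game, with optional headers.
game = chess.pgn.Game.from_board(board)
game.headers["White"] = "You"
game.headers["Black"] = "A friend"
print(game)
```

Running this prints the FEN string for the position after 2...Nc6 and a PGN transcript that most chess apps can import.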
Conclusion
Chess is a wonderful game that can challenge your mind and entertain you for hours. Playing chess online with friends is a great way to have fun and socialize while improving your chess skills. With chess APK unlocked, you can play chess online with friends without any limitations or restrictions. You can access all the features and functions of the app for free, such as online play, tournaments, videos, puzzles, customization, chat, and analysis. You can analyze your games and track your progress, customize your board and pieces, use hints and undo moves, and save your games and share them with others. Chess APK unlocked is easy to get and use, and it offers features and functions that you can't find in regular chess apps. If you love chess and want more fun and learning, you should try it today. For more information and resources, you can visit this link: [Chess APK Unlocked: The Ultimate Guide].
FAQs
Here are some of the frequently asked questions about chess APK unlocked:

- Q: What are some of the best chess APK unlocked files?
- A: Some of the best are Chess.com Mod APK, Lichess Mod APK, Chess Tactics Pro Mod APK, Chess Openings Trainer Mod APK, CT-ART Mod APK, Play Magnus Mod APK, Chess24 Mod APK, Chess Free Mod APK, Chess by AI Factory Limited Mod APK, Chesskid Mod APK, Chess Clock Mod APK, Dr. Wolf Mod APK, Chess Adventure for Kids by ChessKid Mod APK, Chessplode Mod APK, Really Bad Chess Mod APK, Shredder Chess Mod APK, Stockfish Engines OEX Mod APK, Mate in 1 Mod APK, Learn Chess with Dr. Wolf Mod APK, and Magnus Trainer Mod APK.
- Q: Is chess APK unlocked safe and legal?
- A: It is safe and legal as long as you download it from a reliable and reputable source. Be careful and only download APK files from trusted sources, as some may contain malware or viruses that can harm your device or steal your data. Scan the APK file with antivirus or anti-malware software before installing it, and check its permissions and reviews first.
- Q: Can I play chess APK unlocked offline?
- A: Yes, you can play offline without an internet connection. However, some features may not be available or may not work properly offline: you may not be able to play online games, join tournaments, watch videos, access puzzles, or chat with other players, and your rating or progress may not update. You may also encounter errors or bugs. It is therefore recommended to play online whenever possible to enjoy all the features of the app.
- Q: How can I update my chess APK unlocked file?
- A: Download the latest version of the unlocked APK from the same source as before and install it. You may need to uninstall the previous version first, enable the installation of unknown sources again, and back up your data and settings before installing the new one.
- Q: What if I have a problem with my chess APK unlocked file?
- A: If you run into an error message, crash, freeze, or glitch, try these solutions: restart your device and try again; clear the app's cache and data; uninstall and reinstall the app; check your internet connection; or contact the developer or the source of the app for support.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dream League Soccer 2023 Hack for iOS Mod APK with Weak Enemies and More.md b/spaces/1phancelerku/anime-remove-background/Dream League Soccer 2023 Hack for iOS Mod APK with Weak Enemies and More.md
deleted file mode 100644
index b73def0697c7642acbbd6ff12c74112319157fb6..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dream League Soccer 2023 Hack for iOS Mod APK with Weak Enemies and More.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
Dream League Soccer 2023 Mod APK Hack Download iOS
-
If you are a fan of soccer games, you might have heard of Dream League Soccer 2023, one of the most popular and realistic soccer games on mobile devices. But did you know that you can enjoy the game even more with a mod APK hack that gives you access to unlimited resources and features? In this article, we will tell you everything you need to know about Dream League Soccer 2023 mod APK hack, including its features, how to download and install it on your iOS device, and some frequently asked questions. Let's get started!
-
Introduction
-
Soccer is one of the most popular sports in the world, and millions of people love to play it on their mobile devices. There are many soccer games available on the app store, but not all of them can offer the same level of realism, graphics, and gameplay as Dream League Soccer 2023. This game is developed by First Touch Games, a renowned studio that specializes in soccer games. Dream League Soccer 2023 is the latest installment in the series, and it comes with many new features and improvements that make it stand out from the rest.
-
Dream League Soccer 2023 is a soccer simulation game that lets you build your dream team from over 4,000 FIFPRO™ licensed players and take to the field against the world’s best soccer clubs. You can also create your own stadium, customize your kits and logos, and compete in various online and offline modes. The game has stunning graphics, realistic animations, and immersive sound effects that make you feel like you are in the middle of the action. You can also enjoy the game with friends by joining or creating a club and playing online matches with other players around the world.
-
Why do you need a mod APK hack for Dream League Soccer 2023?
-
As much as Dream League Soccer 2023 is fun and addictive, it also has some limitations that can affect your gaming experience. For example, you need to earn coins and gems to unlock new players, stadiums, kits, and other items. You also need to manage your stamina and avoid fouls that can cost you matches. These things can be frustrating and time-consuming, especially if you want to progress faster and enjoy the game without any restrictions. That's why you need a mod APK hack for Dream League Soccer 2023 that can give you unlimited resources and features that can enhance your gameplay and make you unstoppable.
-
Features of Dream League Soccer 2023 Mod APK Hack
-
A mod APK hack is a modified version of the original game that has been tweaked to give you access to features that are not available in the official version. For Dream League Soccer 2023, there are many mod APK hacks available on the internet, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your personal information. Some of them may also not work properly or cause errors or crashes in the game. That's why we recommend you to use the mod APK hack that we have tested and verified for you. This mod APK hack has the following features:
-
No Foul
-
One of the most annoying things in soccer games is when you get fouled by your opponent or commit a foul yourself. This can result in penalties, free kicks, yellow cards, or red cards that can ruin your chances of winning. With this mod APK hack, you don't have to worry about fouls anymore, as this feature will disable them completely. You can play as aggressively as you want, without any consequences. You can also tackle your opponents without any fear of getting booked or sent off. This feature will give you an edge over your rivals and make the game more fun and exciting.
-
Unlimited Stamina
-
Another thing that can affect your performance in soccer games is your stamina. Stamina is the energy that your players have to run, dribble, pass, shoot, and defend. As you play, your stamina will decrease, and your players will become slower, weaker, and less responsive. This can make you vulnerable to your opponents and reduce your chances of scoring or winning. With this mod APK hack, you can have unlimited stamina for your players, meaning they will never get tired or exhausted. You can run as fast and as long as you want, without any loss of speed or strength. You can also perform better skills and moves, and dominate the game from start to finish.
-
Everything Unlocked
-
One of the most appealing features of Dream League Soccer 2023 is the ability to customize your team and stadium with various items and options. You can choose from over 4,000 FIFPRO™ licensed players to build your dream team, and you can also create your own stadium, kits, logos, and more. However, to unlock these items and options, you need to earn coins and gems by playing matches, completing objectives, or watching ads. This can be tedious and time-consuming, especially if you want to unlock everything quickly and easily. With this mod APK hack, you can have everything unlocked from the start, meaning you can access all the players, stadiums, kits, logos, and more without spending any coins or gems. You can also switch between different items and options as you wish, and create your ultimate team and stadium.
-
More Features
-
Besides the features mentioned above, this mod APK hack also has some other features that can make your gameplay more enjoyable and convenient. Some of these features are:
-
-
No Ads: You can play the game without any annoying ads that can interrupt your gameplay or waste your time.
-
No Root: You don't need to root your device to use this mod APK hack, meaning you don't have to risk damaging your device or voiding its warranty.
-
No Ban: You don't have to worry about getting banned by the game developers or the app store for using this mod APK hack, as it has anti-ban protection that will keep you safe and secure.
-
Easy to Use: You don't need any technical skills or knowledge to use this mod APK hack, as it has a simple and user-friendly interface that will guide you through the process.
-
-
How to download and install Dream League Soccer 2023 Mod APK Hack on iOS devices
-
If you are interested in using this mod APK hack for Dream League Soccer 2023 on your iOS device, you need to follow these steps:
-
-
Step 1: Download the mod IPA file from the link below
-
The first thing you need to do is to download the mod IPA file from the link provided below. This is the file that contains the modded version of the game that has all the features that we have discussed above. The file is safe and virus-free, so you don't have to worry about any harm or damage to your device. The file size is about 400 MB, so make sure you have enough storage space on your device before downloading it.
Step 2: Install the mod IPA file using Cydia Impactor or AltStore
-
The next thing you need to do is to install the mod IPA file on your device using either Cydia Impactor or AltStore. These are two tools that allow you to sideload apps on your iOS device without jailbreaking it. You can choose either one of them according to your preference and convenience.
-
If you want to use Cydia Impactor, you need to download it from here and install it on your computer. Then, connect your device to your computer using a USB cable and launch Cydia Impactor. Drag and drop the mod IPA file onto Cydia Impactor and enter your Apple ID and password when prompted. Wait for a few minutes until Cydia Impactor installs the app on your device.
-
If you want to use AltStore, you need to download it from here and install it on both your computer and your device. Then, connect your device to your computer using a USB cable and launch AltStore on both devices. Tap on the "My Apps" tab on AltStore and tap on the "+" icon on the top left corner. Browse and select the mod IPA file from your device and enter your Apple ID and password when prompted. Wait for a few minutes until AltStore installs the app on your device.
-
Step 3: Trust the developer profile in Settings > General > Device Management
-
The last thing you need to do before launching the game is to trust the developer profile that is associated with the app. This is necessary to avoid any errors or warnings that may prevent you from playing the game. To do this, go to Settings > General > Device Management on your device and find the developer profile that has your Apple ID as its name. Tap on it and tap on "Trust" to confirm. You can now go back to your home screen and launch the game.
-
Step 4: Launch the game and enjoy the mod features
-
Congratulations! You have successfully installed Dream League Soccer 2023 mod APK hack on your iOS device. You can now launch the game and enjoy all the mod features that we have discussed above. You can play without any limitations, customize your team and stadium, and dominate the game with unlimited resources and features. Have fun!
-
Conclusion
-
Dream League Soccer 2023 is one of the best soccer games on mobile devices, and it can be even better with a mod APK hack that gives you access to unlimited resources and features. In this article, we have shown you how to download and install Dream League Soccer 2023 mod APK hack on your iOS device using either Cydia Impactor or AltStore. We have also explained the features of this mod APK hack and how they can enhance your gameplay and make you unstoppable. We hope you found this article helpful and informative, and we hope you enjoy playing Dream League Soccer 2023 with this mod APK hack.
-
FAQs
-
Here are some frequently asked questions about Dream League Soccer 2023 mod APK hack:
-
-
Is this mod APK hack safe to use?
-
Yes, this mod APK hack is safe to use, as it has been tested and verified by us. It does not contain any viruses or malware that can harm your device or steal your personal information. It also has anti-ban protection that will prevent you from getting banned by the game developers or the app store.
-
Will this mod APK hack work on any iOS device?
-
Yes, this mod APK hack will work on any iOS device that supports Dream League Soccer 2023, which is compatible with iOS 10.0 or later. You don't need to jailbreak your device to use this mod APK hack, as it can be installed using either Cydia Impactor or AltStore.
-
Can I update this mod APK hack when a new version of Dream League Soccer 2023 is released?
-
No, you cannot update this mod APK hack when a new version of Dream League Soccer 2023 is released, as it may cause errors or crashes in the game. You need to wait for a new version of this mod APK hack that is compatible with the latest version of Dream League Soccer 2023. You can check our website regularly for updates or subscribe to our newsletter to get notified when a new version of this mod APK hack is available.
-
Can I play online matches with other players using this mod APK hack?
-
Yes, you can play online matches with other players using this mod APK hack, as it does not affect your online connectivity or compatibility. However, you should be careful not to abuse the mod features or show them off to other players, as they may report you or complain about you. You should also respect the rules and etiquette of online gaming and avoid cheating or trolling other players.
-
Can I use this mod APK hack with other mods or hacks for Dream League Soccer 2023?
-
No, you cannot use this mod APK hack with other mods or hacks for Dream League Soccer 2023, as they may conflict with each other or cause errors or crashes in the game. You should only use one mod or hack at a time for Dream League Soccer 2023, and make sure it is compatible with the current version of the game.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Drive Modern Buses in Realistic Cities with Bus Simulator 2023 - Download Now.md b/spaces/1phancelerku/anime-remove-background/Drive Modern Buses in Realistic Cities with Bus Simulator 2023 - Download Now.md
deleted file mode 100644
index 693ff73445de1d0a215e736b520caad85c9b083f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Drive Modern Buses in Realistic Cities with Bus Simulator 2023 - Download Now.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
Bus Simulator 2023: The Ultimate Bus Driving Game
-
Do you love driving buses? Do you want to experience what it's like to be a real bus driver in different cities and countries? Do you want to have fun with your friends in online multiplayer mode? If you answered yes to any of these questions, then you should definitely try Bus Simulator 2023, the most realistic and immersive bus simulation game ever made.
-
Bus Simulator 2023 is a game that puts you in the driver's seat and lets you become a real bus driver. You can choose from a wide variety of modern city buses, coach buses, school buses, electric buses, hybrid buses, articulated buses, and more. You can also customize your bus as you wish, with paint, accessories, body parts, flags, decals, and more. You can drive your bus in detailed maps all over the world, from San Francisco to Shanghai, from Buenos Aires to Prague, from Dubai to St. Petersburg, and more. You can also enjoy different modes of gameplay, such as career mode, free-ride mode, and online multiplayer mode with friends.
In this article, we will tell you everything you need to know about Bus Simulator 2023, including its features, how to play it, tips and tricks for it, and how to download it for free on your device. So buckle up and get ready for the ride of your life!
-
Features of Bus Simulator 2023
-
Bus Simulator 2023 is not just a game, it's a simulation of reality. It has many features that make it stand out from other bus games. Here are some of them:
-
-
Realistic maps and buses from around the world: Bus Simulator 2023 features realistic intracity and outside of city maps from different continents and countries. You can drive your bus in United States of America (San Francisco and Texas), South America (Buenos Aires), Europe (Germany, Spain, Prague, St. Petersburg), Dubai, Shanghai, and more. You can also choose from multiple diesel, hybrid, electric, articulated, coach, and school buses that have realistic interiors and exteriors.
-
Career, free-ride and multiplayer modes: Bus Simulator 2023 offers different modes of gameplay for different preferences. In career mode, you can start your own bus company and hire drivers for your buses. You can also create custom routes and schedules for your buses. In free-ride mode, you can drive your bus anywhere you want without any restrictions or objectives. You can explore the city at your own pace and enjoy the scenery. In multiplayer mode, you can join or create online sessions with your friends or other players from around the world. You can chat with them using live chat and cooperate with them in completing routes.
-
Customizable buses and interiors: Bus Simulator 2023 lets you customize your bus as you wish. You can change the paint color, add accessories, body parts, air conditioning, flags, decals, and more. You can also change the interior of your bus by adding seats, steering wheels, mirrors, dashboards, radios, and more. You can also adjust the seat position, the mirrors, the steering wheel, and the pedals to suit your driving style.
-
Intelligent traffic system and weather conditions: Bus Simulator 2023 features an intelligent traffic system that simulates real-life traffic situations. You will encounter different types of vehicles, such as cars, trucks, motorcycles, bicycles, and pedestrians. You will also have to follow the traffic rules, such as speed limits, traffic lights, signs, and signals. You will also have to deal with different weather conditions, such as sunny, cloudy, rainy, snowy, foggy, and stormy. You will have to adapt your driving to the changing road and visibility conditions.
-
Bus company management system: Bus Simulator 2023 allows you to create and manage your own bus company. You can buy and sell buses, hire and fire drivers, assign routes and schedules, monitor the performance and reputation of your company, and compete with other companies in the leaderboards. You can also join or create bus companies with your friends or other players online and cooperate with them in expanding your business.
-
-
How to Play Bus Simulator 2023
-
Bus Simulator 2023 is easy to play but hard to master. Here are some basic steps on how to play it:
-
-
Choose your bus and route: The first thing you need to do is to choose your bus and route. You can select from a variety of buses that have different specifications, such as speed, capacity, fuel consumption, maintenance cost, and more. You can also select from a variety of routes that have different lengths, difficulties, locations, and rewards. You can also create your own custom routes by choosing the starting point, the destination point, and the waypoints in between.
-
Drive your bus and follow the traffic rules: The next thing you need to do is to drive your bus and follow the traffic rules. You can use the keyboard or the mouse to control your bus. You can also use a gamepad or a steering wheel for a more realistic experience. You can adjust the camera angle by using the mouse wheel or the arrow keys. You can also switch between different camera views by pressing the C key. You can use the indicators by pressing the Q and E keys, the horn by pressing the H key, the headlights by pressing the L key, the wipers by pressing the W key, and the emergency brake by pressing the spacebar. You can also use the map and GPS to navigate your route by pressing the M key.
-
Pick up and drop off passengers: The main objective of Bus Simulator 2023 is to pick up and drop off passengers at designated bus stops. You can see the bus stops on your map and GPS. You can also see the number of passengers waiting at each stop by hovering over them with your mouse cursor. You need to stop your bus at the right position and open the doors by pressing the O key. You need to wait for all passengers to board or exit your bus before closing the doors by pressing the O key again. You need to collect fares from passengers by pressing the F key. You need to be careful not to overcharge or undercharge them as this will affect your reputation.
-
Earn money and reputation points: As you complete your routes, you will earn money and reputation points. Money can be used to buy new buses or upgrade existing ones. Reputation points can be used to unlock new routes or access new features. You can also earn bonuses for driving safely, punctually, comfortably, and environmentally friendly. You can also lose money and reputation points for driving recklessly, late, uncomfortably, or environmentally unfriendly. You can also lose money and reputation points for damaging your bus or causing accidents. You can check your balance and reputation level by pressing the B key.
-
-
Tips and Tricks for Bus Simulator 2023
-
Bus Simulator 2023 is a challenging game that requires skill and strategy. Here are some tips and tricks that can help you improve your performance and enjoy the game more:
-
-
Use the map and GPS to navigate: The map and GPS are your best friends in Bus Simulator 2023. They can help you find your way around the city and avoid getting lost. You can see the bus stops, the traffic lights, the speed limits, and the road conditions on your map and GPS. You can also see the distance and time remaining for your route. You can zoom in and out of the map by using the mouse wheel or the plus and minus keys. You can also move the map by dragging it with your mouse cursor. You can toggle the map and GPS on and off by pressing the M key.
-
Adjust the camera and controls to your preference: Bus Simulator 2023 allows you to adjust the camera angle and the controls to your preference. You can change the camera angle by using the mouse wheel or the arrow keys. You can also switch between different camera views by pressing the C key. You can choose from cockpit view, front view, rear view, side view, top view, or free view. You can also adjust the sensitivity and inversion of the mouse and keyboard controls in the settings menu. You can also use a gamepad or a steering wheel for a more realistic experience.
-
Follow the speed limit and avoid collisions: One of the most important things in Bus Simulator 2023 is to follow the speed limit and avoid collisions. The speed limit varies depending on the road type, the weather condition, and the traffic situation. You can see the speed limit on your dashboard or on your GPS. You can also see the speed limit signs on the road. You need to slow down when approaching curves, intersections, bus stops, or traffic lights. You also need to avoid colliding with other vehicles, pedestrians, or objects as this will damage your bus and cost you money and reputation points.
-
Use the indicators and horn to communicate with other drivers: Another important thing in Bus Simulator 2023 is to use the indicators and horn to communicate with other drivers. You need to use the indicators by pressing the Q and E keys when turning left or right, changing lanes, or merging into traffic. This will signal your intention to other drivers and prevent accidents. You also need to use the horn by pressing the H key when overtaking, warning, or greeting other drivers. This will alert them of your presence and avoid collisions.
-
Check the weather forecast and plan accordingly: The weather condition in Bus Simulator 2023 affects your driving experience. The weather condition changes dynamically according to real-time data. You can check the weather forecast by pressing the W key. You can see the current temperature, humidity, wind speed, and precipitation. You can also see the forecast for the next hours and days. The weather condition affects the road condition, the visibility, and the traffic behavior. You need to plan your route and driving strategy accordingly. For example, you need to drive more carefully when it's raining or snowing, as the road will be slippery and the visibility will be low. You also need to use the wipers by pressing the W key to clear your windshield. You also need to use the headlights by pressing the L key when it's dark or foggy.
-
-
Download Bus Simulator 2023 for Free
-
If you are interested in playing Bus Simulator 2023, you will be happy to know that you can download it for free. The game is available for Android, iOS, and Windows devices. Here is how to download it:
-
-
For Android devices: Go to the Google Play Store and search for Bus Simulator 2023. Tap on the Install button and wait for the download to finish. Alternatively, you can scan this QR code with your device's camera to go directly to the download page:
-
-
-
-
-
For iOS devices: Go to the App Store and search for Bus Simulator 2023. Tap on the Get button and wait for the download to finish. Alternatively, you can scan this QR code with your device's camera to go directly to the download page:
-
-
-
-
For Windows devices: Go to the Microsoft Store and search for Bus Simulator 2023. Click on the Get button and wait for the download to finish. Alternatively, you can scan this QR code with your device's camera to go directly to the download page:
-
-
-
-
How to install and run Bus Simulator 2023 on your device: After downloading Bus Simulator 2023, install it by following the on-screen instructions, then run it by tapping or clicking the Bus Simulator 2023 icon on your home screen or in your app menu.
-
How to access the online multiplayer mode and chat with friends: To access the online multiplayer mode and chat with friends, you need to have an internet connection and a valid account. You can create an account by using your email address or your Facebook account. To join or create an online session, just go to the multiplayer menu and select an option. You can chat with other players by using the live chat feature in the game.
-
-
Conclusion
-
Bus Simulator 2023 is a game that lets you become a real bus driver and experience what it's like to drive buses in different cities and countries. You can choose from a wide variety of buses, customize them as you wish, drive them in realistic maps, pick up and drop off passengers, earn money and reputation points, manage your own bus company, and have fun with your friends in online multiplayer mode.
-
Bus Simulator 2023 is a game that is suitable for all ages and preferences. Whether you are a casual gamer or a hardcore gamer, whether you are a bus enthusiast or a bus novice, whether you are looking for a relaxing game or a challenging game, you will find something that suits you in Bus Simulator 2023.
-
So what are you waiting for? Download Bus Simulator 2023 today and enjoy the best bus driving game ever!
-
Frequently Asked Questions
-
Here are some frequently asked questions about Bus Simulator 2023:
-
-
Q: Is Bus Simulator 2023 free?
-
A: Yes, Bus Simulator 2023 is free to download and play on Android, iOS, and Windows devices.
-
Q: How realistic is Bus Simulator 2023?
-
A: Bus Simulator 2023 is very realistic in terms of graphics, physics, sound effects, and its traffic, weather, and bus company management systems. It also features realistic maps and buses from around the world.
-
Q: How many buses and maps are there in Bus Simulator 2023?
-
A: Bus Simulator 2023 features over 50 buses and over 20 maps from different continents and countries. You can also create your own custom routes by choosing the starting point, the destination point, and the waypoints in between.
-
Q: How can I customize my bus in Bus Simulator 2023?
-
A: You can customize your bus by changing the paint color, adding accessories, body parts, air conditioning, flags, decals, and more. You can also change the interior of your bus by adding seats, steering wheels, mirrors, dashboards, radios, and more. You can also adjust the seat position, the mirrors, the steering wheel, and the pedals to suit your driving style.
-
Q: How can I play with my friends in Bus Simulator 2023?
-
A: You can play with your friends in online multiplayer mode in Bus Simulator 2023. You need to have an internet connection and a valid account. You can create an account by using your email address or your Facebook account. To join or create an online session, just go to the multiplayer menu and select an option. You can chat with your friends by using the live chat feature in the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/52Hz/CMFNet_dehazing/model/block.py b/spaces/52Hz/CMFNet_dehazing/model/block.py
deleted file mode 100644
index 32d4d9d50d6a2c1e7251fc6551defbd605497779..0000000000000000000000000000000000000000
--- a/spaces/52Hz/CMFNet_dehazing/model/block.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import torch
-import torch.nn as nn
-##########################################################################
-def conv(in_channels, out_channels, kernel_size, bias=False, stride=1):
- layer = nn.Conv2d(in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias, stride=stride)
- return layer
-
-
-def conv3x3(in_chn, out_chn, bias=True):
- layer = nn.Conv2d(in_chn, out_chn, kernel_size=3, stride=1, padding=1, bias=bias)
- return layer
-
-
-def conv_down(in_chn, out_chn, bias=False):
- layer = nn.Conv2d(in_chn, out_chn, kernel_size=4, stride=2, padding=1, bias=bias)
- return layer
-
-##########################################################################
-## Supervised Attention Module (SAM)
-class SAM(nn.Module):
- def __init__(self, n_feat, kernel_size, bias):
- super(SAM, self).__init__()
- self.conv1 = conv(n_feat, n_feat, kernel_size, bias=bias)
- self.conv2 = conv(n_feat, 3, kernel_size, bias=bias)
- self.conv3 = conv(3, n_feat, kernel_size, bias=bias)
-
- def forward(self, x, x_img):
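- # Transform the features (x1), predict a restored image as a residual on top of x_img,
- # then derive a per-pixel attention mask from that image and use it to re-weight the
- # features, with a skip connection back to x.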
- x1 = self.conv1(x)
- img = self.conv2(x) + x_img
- x2 = torch.sigmoid(self.conv3(img))
- x1 = x1 * x2
- x1 = x1 + x
- return x1, img
-
-##########################################################################
-## Spatial Attention
-class SALayer(nn.Module):
- def __init__(self, kernel_size=7):
- super(SALayer, self).__init__()
- self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
- self.sigmoid = nn.Sigmoid()
-
- def forward(self, x):
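- # Pool across the channel dimension with both mean and max, stack the two maps, and
- # predict a single-channel spatial attention mask that rescales the input.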
- avg_out = torch.mean(x, dim=1, keepdim=True)
- max_out, _ = torch.max(x, dim=1, keepdim=True)
- y = torch.cat([avg_out, max_out], dim=1)
- y = self.conv1(y)
- y = self.sigmoid(y)
- return x * y
-
-# Spatial Attention Block (SAB)
-class SAB(nn.Module):
- def __init__(self, n_feat, kernel_size, reduction, bias, act):
- super(SAB, self).__init__()
- modules_body = [conv(n_feat, n_feat, kernel_size, bias=bias), act, conv(n_feat, n_feat, kernel_size, bias=bias)]
- self.body = nn.Sequential(*modules_body)
- self.SA = SALayer(kernel_size=7)
-
- def forward(self, x):
- res = self.body(x)
- res = self.SA(res)
- res += x
- return res
-
-##########################################################################
-## Pixel Attention
-class PALayer(nn.Module):
- def __init__(self, channel, reduction=16, bias=False):
- super(PALayer, self).__init__()
- self.pa = nn.Sequential(
- nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=bias),
- nn.ReLU(inplace=True),
- nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=bias), # channel <-> 1
- nn.Sigmoid()
- )
-
- def forward(self, x):
- y = self.pa(x)
- return x * y
-
-## Pixel Attention Block (PAB)
-class PAB(nn.Module):
- def __init__(self, n_feat, kernel_size, reduction, bias, act):
- super(PAB, self).__init__()
- modules_body = [conv(n_feat, n_feat, kernel_size, bias=bias), act, conv(n_feat, n_feat, kernel_size, bias=bias)]
- self.PA = PALayer(n_feat, reduction, bias=bias)
- self.body = nn.Sequential(*modules_body)
-
- def forward(self, x):
- res = self.body(x)
- res = self.PA(res)
- res += x
- return res
-
-##########################################################################
-## Channel Attention Layer
-class CALayer(nn.Module):
- def __init__(self, channel, reduction=16, bias=False):
- super(CALayer, self).__init__()
- # global average pooling: feature --> point
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- # feature channel downscale and upscale --> channel weight
- self.conv_du = nn.Sequential(
- nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=bias),
- nn.ReLU(inplace=True),
- nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=bias),
- nn.Sigmoid()
- )
-
- def forward(self, x):
- y = self.avg_pool(x)
- y = self.conv_du(y)
- return x * y
-
-## Channel Attention Block (CAB)
-class CAB(nn.Module):
- def __init__(self, n_feat, kernel_size, reduction, bias, act):
- super(CAB, self).__init__()
- modules_body = [conv(n_feat, n_feat, kernel_size, bias=bias), act, conv(n_feat, n_feat, kernel_size, bias=bias)]
-
- self.CA = CALayer(n_feat, reduction, bias=bias)
- self.body = nn.Sequential(*modules_body)
-
- def forward(self, x):
- res = self.body(x)
- res = self.CA(res)
- res += x
- return res
-
-
-if __name__ == "__main__":
- import time
- from thop import profile
- # layer = CAB(64, 3, 4, False, nn.PReLU())
- layer = PAB(64, 3, 4, False, nn.PReLU())
- # layer = SAB(64, 3, 4, False, nn.PReLU())
- for idx, m in enumerate(layer.modules()):
- print(idx, "-", m)
- s = time.time()
-
- rgb = torch.ones(1, 64, 256, 256, dtype=torch.float, requires_grad=False)
- out = layer(rgb)
- flops, params = profile(layer, inputs=(rgb,))
- print('parameters:', params)
- print('flops', flops)
- print('time: {:.4f}ms'.format((time.time()-s)*10))
\ No newline at end of file
diff --git a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py
deleted file mode 100644
index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
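- # Stage 1: split the spectrogram into low- and high-frequency halves, run each half
- # through its own ASPP encoder-decoder, and re-join the results along the frequency axis.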
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
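- # Sharpen the predicted mask, using a stronger exponent above split_bin than below it.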
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/801artistry/RVC801/julius/lowpass.py b/spaces/801artistry/RVC801/julius/lowpass.py
deleted file mode 100644
index 0eb46e382b20bfc2d93482f9f027986b863de6f0..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/julius/lowpass.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-"""
-FIR windowed sinc lowpass filters.
-"""
-
-import math
-from typing import Sequence, Optional
-
-import torch
-from torch.nn import functional as F
-
-from .core import sinc
-from .fftconv import fft_conv1d
-from .utils import simple_repr
-
-
-class LowPassFilters(torch.nn.Module):
- """
- Bank of low pass filters. Note that a high pass or band pass filter can easily
- be implemented by subtracting versions of the same signal processed with low pass filters at different
- frequencies (see `julius.bands.SplitBands` for instance).
- This uses a windowed sinc filter, very similar to the one used in
- `julius.resample`. However, because we do not change the sample rate here,
- this filter can be much more efficiently implemented using the FFT convolution from
- `julius.fftconv`.
-
- Args:
- cutoffs (list[float]): list of cutoff frequencies, in [0, 0.5] expressed as `f/f_s` where
- f_s is the samplerate and `f` is the cutoff frequency.
- The upper limit is 0.5, because a signal sampled at `f_s` contains only
- frequencies under `f_s / 2`.
- stride (int): how much to decimate the output. Keep in mind that decimation
- of the output is only acceptable if the cutoff frequency is under `1/ (2 * stride)`
- of the original sampling rate.
- pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
- the output will have the same length as the input.
- zeros (float): Number of zero crossings to keep.
- Controls the receptive field of the Finite Impulse Response filter.
- For lowpass filters with low cutoff frequency, e.g. 40Hz at 44.1kHz,
- it is a bad idea to set this to a high value.
- The default is likely appropriate for most uses. Lower values
- will result in a faster filter, but with a slower attenuation around the
- cutoff frequency.
- fft (bool or None): if True, uses `julius.fftconv` rather than PyTorch convolutions.
- If False, uses PyTorch convolutions. If None, either one will be chosen automatically
- depending on the effective filter size.
-
-
- ..warning::
- All the filters will use the same filter size, aligned on the lowest
- frequency provided. If you combine a lot of filters with very diverse frequencies, it might
- be more efficient to split them over multiple modules with similar frequencies.
-
- ..note::
- A lowpass with a cutoff frequency of 0 is defined as the null function
- by convention here. This allows for a highpass with a cutoff of 0 to
- be equal to identity, as defined in `julius.filters.HighPassFilters`.
-
- Shape:
-
- - Input: `[*, T]`
- - Output: `[F, *, T']`, with `T'=T` if `pad` is True and `stride` is 1, and
- `F` is the number of cutoff frequencies.
-
- >>> lowpass = LowPassFilters([1/4])
- >>> x = torch.randn(4, 12, 21, 1024)
- >>> list(lowpass(x).shape)
- [1, 4, 12, 21, 1024]
- """
-
- def __init__(self, cutoffs: Sequence[float], stride: int = 1, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- super().__init__()
- self.cutoffs = list(cutoffs)
- if min(self.cutoffs) < 0:
- raise ValueError("Minimum cutoff must be larger than zero.")
- if max(self.cutoffs) > 0.5:
- raise ValueError("A cutoff above 0.5 does not make sense.")
- self.stride = stride
- self.pad = pad
- self.zeros = zeros
- self.half_size = int(zeros / min([c for c in self.cutoffs if c > 0]) / 2)
- if fft is None:
- fft = self.half_size > 32
- self.fft = fft
- window = torch.hann_window(2 * self.half_size + 1, periodic=False)
- time = torch.arange(-self.half_size, self.half_size + 1)
- filters = []
- for cutoff in cutoffs:
- if cutoff == 0:
- filter_ = torch.zeros_like(time)
- else:
- filter_ = 2 * cutoff * window * sinc(2 * cutoff * math.pi * time)
- # Normalize filter to have sum = 1, otherwise we will have a small leakage
- # of the constant component in the input signal.
- filter_ /= filter_.sum()
- filters.append(filter_)
- self.register_buffer("filters", torch.stack(filters)[:, None])
-
- def forward(self, input):
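- # Flatten all leading dimensions into a batch of single-channel signals, filter them,
- # then restore the original shape with an extra leading dimension of size len(cutoffs).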
- shape = list(input.shape)
- input = input.view(-1, 1, shape[-1])
- if self.pad:
- input = F.pad(input, (self.half_size, self.half_size), mode='replicate')
- if self.fft:
- out = fft_conv1d(input, self.filters, stride=self.stride)
- else:
- out = F.conv1d(input, self.filters, stride=self.stride)
- shape.insert(0, len(self.cutoffs))
- shape[-1] = out.shape[-1]
- return out.permute(1, 0, 2).reshape(shape)
-
- def __repr__(self):
- return simple_repr(self)
-
-
-class LowPassFilter(torch.nn.Module):
- """
- Same as `LowPassFilters` but applies a single low pass filter.
-
- Shape:
-
- - Input: `[*, T]`
- - Output: `[*, T']`, with `T'=T` if `pad` is True and `stride` is 1.
-
- >>> lowpass = LowPassFilter(1/4, stride=2)
- >>> x = torch.randn(4, 124)
- >>> list(lowpass(x).shape)
- [4, 62]
- """
-
- def __init__(self, cutoff: float, stride: int = 1, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- super().__init__()
- self._lowpasses = LowPassFilters([cutoff], stride, pad, zeros, fft)
-
- @property
- def cutoff(self):
- return self._lowpasses.cutoffs[0]
-
- @property
- def stride(self):
- return self._lowpasses.stride
-
- @property
- def pad(self):
- return self._lowpasses.pad
-
- @property
- def zeros(self):
- return self._lowpasses.zeros
-
- @property
- def fft(self):
- return self._lowpasses.fft
-
- def forward(self, input):
- return self._lowpasses(input)[0]
-
- def __repr__(self):
- return simple_repr(self)
-
-
-def lowpass_filters(input: torch.Tensor, cutoffs: Sequence[float],
- stride: int = 1, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- """
- Functional version of `LowPassFilters`, refer to this class for more information.
- """
- return LowPassFilters(cutoffs, stride, pad, zeros, fft).to(input)(input)
-
-
-def lowpass_filter(input: torch.Tensor, cutoff: float,
- stride: int = 1, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- """
- Same as `lowpass_filters` but with a single cutoff frequency.
- Output will not have a dimension inserted in the front.
- """
- return lowpass_filters(input, [cutoff], stride, pad, zeros, fft)[0]
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/losses/stft_loss.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/losses/stft_loss.py
deleted file mode 100644
index 74d2aa21ad30ba094c406366e652067462f49cd2..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/losses/stft_loss.py
+++ /dev/null
@@ -1,153 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright 2019 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-
-"""STFT-based Loss modules."""
-
-import torch
-import torch.nn.functional as F
-
-
-def stft(x, fft_size, hop_size, win_length, window):
- """Perform STFT and convert to magnitude spectrogram.
-
- Args:
- x (Tensor): Input signal tensor (B, T).
- fft_size (int): FFT size.
- hop_size (int): Hop size.
- win_length (int): Window length.
- window (str): Window function type.
-
- Returns:
- Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1).
-
- """
- x_stft = torch.stft(x, fft_size, hop_size, win_length, window)
- real = x_stft[..., 0]
- imag = x_stft[..., 1]
-
- # NOTE(kan-bayashi): clamp is needed to avoid nan or inf
- return torch.sqrt(torch.clamp(real ** 2 + imag ** 2, min=1e-7)).transpose(2, 1)
-
-
-class SpectralConvergengeLoss(torch.nn.Module):
- """Spectral convergence loss module."""
-
- def __init__(self):
- """Initialize spectral convergence loss module."""
- super(SpectralConvergengeLoss, self).__init__()
-
- def forward(self, x_mag, y_mag):
- """Calculate forward propagation.
-
- Args:
- x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins).
- y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins).
-
- Returns:
- Tensor: Spectral convergence loss value.
-
- """
- return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro")
-
-
-class LogSTFTMagnitudeLoss(torch.nn.Module):
- """Log STFT magnitude loss module."""
-
- def __init__(self):
- """Initialize log STFT magnitude loss module."""
- super(LogSTFTMagnitudeLoss, self).__init__()
-
- def forward(self, x_mag, y_mag):
- """Calculate forward propagation.
-
- Args:
- x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins).
- y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins).
-
- Returns:
- Tensor: Log STFT magnitude loss value.
-
- """
- return F.l1_loss(torch.log(y_mag), torch.log(x_mag))
-
-
-class STFTLoss(torch.nn.Module):
- """STFT loss module."""
-
- def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window"):
- """Initialize STFT loss module."""
- super(STFTLoss, self).__init__()
- self.fft_size = fft_size
- self.shift_size = shift_size
- self.win_length = win_length
- self.window = getattr(torch, window)(win_length)
- self.spectral_convergenge_loss = SpectralConvergengeLoss()
- self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss()
-
- def forward(self, x, y):
- """Calculate forward propagation.
-
- Args:
- x (Tensor): Predicted signal (B, T).
- y (Tensor): Groundtruth signal (B, T).
-
- Returns:
- Tensor: Spectral convergence loss value.
- Tensor: Log STFT magnitude loss value.
-
- """
- x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window)
- y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window)
- sc_loss = self.spectral_convergenge_loss(x_mag, y_mag)
- mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag)
-
- return sc_loss, mag_loss
-
-
-class MultiResolutionSTFTLoss(torch.nn.Module):
- """Multi resolution STFT loss module."""
-
- def __init__(self,
- fft_sizes=[1024, 2048, 512],
- hop_sizes=[120, 240, 50],
- win_lengths=[600, 1200, 240],
- window="hann_window"):
- """Initialize Multi resolution STFT loss module.
-
- Args:
- fft_sizes (list): List of FFT sizes.
- hop_sizes (list): List of hop sizes.
- win_lengths (list): List of window lengths.
- window (str): Window function type.
-
- """
- super(MultiResolutionSTFTLoss, self).__init__()
- assert len(fft_sizes) == len(hop_sizes) == len(win_lengths)
- self.stft_losses = torch.nn.ModuleList()
- for fs, ss, wl in zip(fft_sizes, hop_sizes, win_lengths):
- self.stft_losses += [STFTLoss(fs, ss, wl, window)]
-
- def forward(self, x, y):
- """Calculate forward propagation.
-
- Args:
- x (Tensor): Predicted signal (B, T).
- y (Tensor): Groundtruth signal (B, T).
-
- Returns:
- Tensor: Multi resolution spectral convergence loss value.
- Tensor: Multi resolution log STFT magnitude loss value.
-
- """
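- # Average the per-resolution losses so the overall scale does not depend on how many
- # resolutions are configured.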
- sc_loss = 0.0
- mag_loss = 0.0
- for f in self.stft_losses:
- sc_l, mag_l = f(x, y)
- sc_loss += sc_l
- mag_loss += mag_l
- sc_loss /= len(self.stft_losses)
- mag_loss /= len(self.stft_losses)
-
- return sc_loss, mag_loss
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py
deleted file mode 100644
index aaac6df39ec06c2d52b2f0cabf967ab447f9b04a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py
+++ /dev/null
@@ -1,1262 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-import os
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager
-from functools import partial
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.models.diffusion.ddpm import DDPM, disabled_train
-from omegaconf import ListConfig
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-class LatentDiffusion_audio(DDPM):
- """main class"""
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- num_timesteps_cond=None,
- mel_dim=80,
- mel_length=848,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- *args, **kwargs):
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__':
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.mel_dim = mel_dim
- self.mel_length = mel_length
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
- except:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys)
- self.restarted_from_ckpt = True
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
-
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
-
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, batch_size, null_label=None):
- if null_label is not None:
- xc = null_label
- if isinstance(xc, ListConfig):
- xc = list(xc)
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- if hasattr(xc, "to"):
- xc = xc.to(self.device)
- c = self.get_learned_conditioning(xc)
- else:
- if self.cond_stage_key in ["class_label", "cls"]:
- xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)
- return self.get_learned_conditioning(xc)
- else:
- raise NotImplementedError("todo")
- if isinstance(c, list): # in case the encoder gives us a list
- for i in range(len(c)):
- c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)
- else:
- c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to image border,
- with min distance = 0 at the border and max dist = 0.5 at the image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
- def get_weighting(self, h, w, Ly, Lx, device):
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
-
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox']:
- xc = batch[cond_key]
- elif cond_key == 'class_label':
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- # import pudb; pudb.set_trace()
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
- # Testing #
- if cond_key == 'masked_image':
- mask = super().get_input(batch, "mask")
- cc = torch.nn.functional.interpolate(mask, size=c.shape[-2:]) # [B, 1, 10, 106]
- c = torch.cat((c, cc), dim=1) # [B, 5, 10, 106]
- # Testing #
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
-
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- uf = self.split_input_params["vqf"]
- bs, nc, h, w = z.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
-
- z = unfold(z) # (bn, nc * prod(**ks), L)
- # 1. Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- # 2. apply model loop over last dim
- if isinstance(self.first_stage_model, VQModelInterface):
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
- force_not_quantize=predict_cids or force_not_quantize)
- for i in range(z.shape[-1])]
- else:
-
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
- o = o * weighting
- # Reverse 1. reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
- return decoded
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- # same as above but without decorator
- def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
-
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- uf = self.split_input_params["vqf"]
- bs, nc, h, w = z.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
-
- z = unfold(z) # (bn, nc * prod(**ks), L)
- # 1. Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- # 2. apply model loop over last dim
- if isinstance(self.first_stage_model, VQModelInterface):
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
- force_not_quantize=predict_cids or force_not_quantize)
- for i in range(z.shape[-1])]
- else:
-
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
- o = o * weighting
- # Reverse 1. reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
- return decoded
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- df = self.split_input_params["vqf"]
- self.split_input_params['original_image_size'] = x.shape[-2:]
- bs, nc, h, w = x.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df)
- z = unfold(x) # (bn, nc * prod(**ks), L)
- # Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- output_list = [self.first_stage_model.encode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1)
- o = o * weighting
-
- # Reverse reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization
- return decoded
-
- else:
- return self.first_stage_model.encode(x)
- else:
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key)
- loss = self(x, c)
- return loss
-
- def test_step(self,batch,batch_idx):
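- # Repeat each conditioning entry test_repeat times, sample latents with the diffusion
- # model, decode them to mel spectrograms with the first stage model, and save each
- # reconstruction as a .npy file named after its source file, caption index, and repeat index.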
- cond = batch[self.cond_stage_key] * self.test_repeat
- cond = self.get_learned_conditioning(cond) # c: string -> [B, T, Context_dim]
- batch_size = len(cond)
- enc_emb = self.sample(cond,batch_size,timesteps=self.test_numsteps)# shape = [batch_size,self.channels,self.mel_dim,self.mel_length]
- xrec = self.decode_first_stage(enc_emb)
- reconstructions = (xrec + 1)/2 # to mel scale
- test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
- savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class')
- if not os.path.exists(savedir):
- os.makedirs(savedir)
-
- file_names = batch['f_name']
- nfiles = len(file_names)
- reconstructions = reconstructions.cpu().numpy().squeeze(1) # squeeze channel dim
- for k in range(reconstructions.shape[0]):
- b,repeat = k % nfiles, k // nfiles
- vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num
- v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:]
- save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}_{repeat}.npy')# the num_th caption, the repeat_th repetition
- np.save(save_img_path,reconstructions[b])
-
- return None
-
- def forward(self, x, c, *args, **kwargs):
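- # Sample a random diffusion timestep per batch element, encode the raw conditioning here
- # when the conditioning stage is trainable, and compute the denoising loss at those timesteps.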
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c) # c: string -> [B, T, Context_dim]
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
-
- def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset
- def rescale_bbox(bbox):
- x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
- y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
- w = min(bbox[2] / crop_coordinates[2], 1 - x0)
- h = min(bbox[3] / crop_coordinates[3], 1 - y0)
- return x0, y0, w, h
-
- return [rescale_bbox(b) for b in bboxes]
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
-
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- if hasattr(self, "split_input_params"):
- assert len(cond) == 1 # todo can only deal with one conditioning atm
- assert not return_ids
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
-
- h, w = x_noisy.shape[-2:]
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride)
-
- z = unfold(x_noisy) # (bn, nc * prod(**ks), L)
- # Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
- z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])]
-
- if self.cond_stage_key in ["image", "LR_image", "segmentation",
- 'bbox_img'] and self.model.conditioning_key: # todo check for completeness
- c_key = next(iter(cond.keys())) # get key
- c = next(iter(cond.values())) # get value
- assert (len(c) == 1) # todo extend to list with more than one elem
- c = c[0] # get element
-
- c = unfold(c)
- c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
-
- elif self.cond_stage_key == 'coordinates_bbox':
- assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
-
- # assuming padding of unfold is always 0 and its dilation is always 1
- n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
- full_img_h, full_img_w = self.split_input_params['original_image_size']
- # as we are operating on latents, we need the factor from the original image size to the
- # spatial latent size to properly rescale the crops for regenerating the bbox annotations
- num_downs = self.first_stage_model.encoder.num_resolutions - 1
- rescale_latent = 2 ** (num_downs)
-
- # get top left positions of patches, conforming to the bbox tokenizer; therefore we
- # need to rescale the tl patch coordinates to be in between (0,1)
- tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
- rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
- for patch_nr in range(z.shape[-1])]
-
- # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w)
- patch_limits = [(x_tl, y_tl,
- rescale_latent * ks[0] / full_img_w,
- rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates]
- # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates]
-
- # tokenize crop coordinates for the bounding boxes of the respective patches
- patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device)
- for bbox in patch_limits] # list of length l with tensors of shape (1, 2)
- print(patch_limits_tknzd[0].shape)
- # cut tknzd crop position from conditioning
- assert isinstance(cond, dict), 'cond must be dict to be fed into model'
- cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device)
- print(cut_cond.shape)
-
- adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd])
- adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n')
- print(adapted_cond.shape)
- adapted_cond = self.get_learned_conditioning(adapted_cond)
- print(adapted_cond.shape)
- adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1])
- print(adapted_cond.shape)
-
- cond_list = [{'c_crossattn': [e]} for e in adapted_cond]
-
- else:
- cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient
-
- # apply model by loop over crops
- output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
- assert not isinstance(output_list[0],
- tuple) # todo cant deal with multiple model outputs check this never happens
-
- o = torch.stack(output_list, axis=-1)
- o = o * weighting
- # Reverse reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- x_recon = fold(o) / normalization
-
- else:
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
-
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
-
- def p_losses(self, x_start, cond, t, noise=None):
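- # Diffuse x_start to timestep t, predict the target (noise for "eps", the clean latent
- # for "x0"), and combine the weighted simple loss with the VLB term.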
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
-
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
-
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size if batch_size is not None else shape[0]
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None,**kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.mel_dim, self.mel_length)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
-    def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
-
-        if ddim:
-            ddim_sampler = DDIMSampler(self)
-            shape = (self.channels, self.mel_dim, self.mel_length)
-            samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size,
-                                                         shape, cond, verbose=False, **kwargs)
-
-        else:
-            samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
-                                                 return_intermediates=True, **kwargs)
-
- return samples, intermediates
-
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, **kwargs):
-
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode") and self.cond_stage_key != "masked_image":
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key == "masked_image":
- log["mask"] = c[:, -1, :, :][:, None, :, :]
- xc = self.cond_stage_model.decode(c[:, :self.cond_stage_model.embed_dim, :, :])
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption"]:
- xc = log_txt_as_img((256, 256), batch["caption"])
- log["conditioning"] = xc
- elif self.cond_stage_key == 'class_label':
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
-                samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
-                                                         ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with self.ema_scope("Plotting Quantized Denoised"):
-                samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
-                                                         ddim_steps=ddim_steps, eta=ddim_eta,
-                                                         quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with self.ema_scope("Plotting Inpaint"):
-
-                samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
-                                             ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask_inpainting"] = mask
-
- # outpaint
- mask = 1 - mask
- with self.ema_scope("Plotting Outpaint"):
-                samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
-                                             ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
- log["mask_outpainting"] = mask
-
- if plot_progressive_rows:
- with self.ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.mel_dim, self.mel_length),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class LatentFinetuneDiffusion(LatentDiffusion_audio):
- """
-    Basis for different finetunes, such as inpainting or depth2image
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys: tuple,
- finetune_keys=("model.diffusion_model.input_blocks.0.0.weight",
- "model_ema.diffusion_modelinput_blocks00weight"
- ),
- keep_finetune_dims=4,
- # if model was trained without concat mode before and we would like to keep these channels
- c_concat_log_start=None, # to log reconstruction of c_concat codes
- c_concat_log_end=None,
- *args, **kwargs
- ):
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", list())
- super().__init__(*args, **kwargs)
- self.finetune_keys = finetune_keys
- self.concat_keys = concat_keys
- self.keep_dims = keep_finetune_dims
- self.c_concat_log_start = c_concat_log_start
- self.c_concat_log_end = c_concat_log_end
-
- if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'
- if exists(ckpt_path):
- self.init_from_ckpt(ckpt_path, ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
-
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
-
- # make it explicit, finetune by including extra input channels
- if exists(self.finetune_keys) and k in self.finetune_keys:
- new_entry = None
- for name, param in self.named_parameters():
- if name in self.finetune_keys:
- print(
- f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only")
- new_entry = torch.zeros_like(param) # zero init
- assert exists(new_entry), 'did not find matching parameter to modify'
- new_entry[:, :self.keep_dims, ...] = sd[k]
- sd[k] = new_entry
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)
- c_cat, c = c["c_concat"][0], c["c_crossattn"][0]
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"])
- log["conditioning"] = xc
- elif self.cond_stage_key == 'class_label':
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if not (self.c_concat_log_start is None and self.c_concat_log_end is None):
- log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with self.ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- uc_cat = c_cat
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- with self.ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
-
-class LatentInpaintDiffusion(LatentFinetuneDiffusion):
- """
-    Can either run as a pure inpainting model (concat mode only) or with mixed conditionings,
-    e.g. the mask as concat and text via cross-attention.
-    To disable finetuning mode, set finetune_keys to None.
- """
-
- def __init__(self,
- concat_keys=("mask", "masked_image"),
- masked_image_key="masked_image",
- *args, **kwargs
- ):
- super().__init__(concat_keys, *args, **kwargs)
- self.masked_image_key = masked_image_key
- assert self.masked_image_key in concat_keys
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- c_cat = list()
- for ck in self.concat_keys:
- if len(batch[ck].shape) == 3:
- batch[ck] = batch[ck][..., None]
- cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- bchw = z.shape
- if ck != self.masked_image_key:
- cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
- else:
- cc = self.get_first_stage_encoding(self.encode_first_stage(cc))
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)
- log["masked_image"] = rearrange(args[0]["masked_image"],
- 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- return log
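The `# no noise when t == 0` trick in `p_sample` above is easy to miss. Below is a minimal, self-contained sketch (illustrative names and shapes, not this project's API) of how the reverse-diffusion step adds `sigma * eps` everywhere except for samples whose timestep has already reached 0, where the posterior mean is returned directly.

```python
import torch

def reverse_step(model_mean, model_log_variance, t):
    # t: (B,) integer timesteps; build a per-sample mask broadcast over C, H, W
    b = model_mean.shape[0]
    nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (model_mean.dim() - 1)))
    noise = torch.randn_like(model_mean)
    # x_{t-1} = mu + sigma * eps, except at t == 0 where the mean is returned as-is
    return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise

mean = torch.zeros(4, 3, 8, 8)
logvar = torch.full_like(mean, -2.0)
t = torch.tensor([0, 1, 5, 9])
x_prev = reverse_step(mean, logvar, t)
print(x_prev[0].abs().max())  # 0.0 -- no noise is added for the sample with t == 0
```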
diff --git a/spaces/AgentVerse/agentVerse/agentverse/logging.py b/spaces/AgentVerse/agentVerse/agentverse/logging.py
deleted file mode 100644
index 9ed68d6f2b2c7f5d54bcfaa698b6627008932ccc..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/logging.py
+++ /dev/null
@@ -1,291 +0,0 @@
-"""Logging module for Auto-GPT."""
-import logging
-import os
-import random
-import re
-import time
-import json
-import abc
-from logging import LogRecord
-from typing import Any, List
-
-from colorama import Fore, Style
-from agentverse.utils import Singleton
-
-
-# from autogpt.speech import say_text
-class JsonFileHandler(logging.FileHandler):
- def __init__(self, filename, mode="a", encoding=None, delay=False):
- super().__init__(filename, mode, encoding, delay)
-
- def emit(self, record):
- json_data = json.loads(self.format(record))
- with open(self.baseFilename, "w", encoding="utf-8") as f:
- json.dump(json_data, f, ensure_ascii=False, indent=4)
-
-
-class JsonFormatter(logging.Formatter):
- def format(self, record):
- return record.msg
-
-
-class Logger(metaclass=Singleton):
- """
-    Logger that handles titles in different colors.
-    Outputs logs to the console, activity.log, and error.log.
-    The console handler simulates typing.
- """
-
- def __init__(self):
- # create log directory if it doesn't exist
- this_files_dir_path = os.path.dirname(__file__)
- log_dir = os.path.join(this_files_dir_path, "../logs")
- if not os.path.exists(log_dir):
- os.makedirs(log_dir)
-
- log_file = "activity.log"
- error_file = "error.log"
-
- console_formatter = AutoGptFormatter("%(title_color)s %(message)s")
-
-        # Create a console handler that simulates typing
- self.typing_console_handler = TypingConsoleHandler()
- self.typing_console_handler.setLevel(logging.INFO)
- self.typing_console_handler.setFormatter(console_formatter)
-
- # Create a handler for console without typing simulation
- self.console_handler = ConsoleHandler()
- self.console_handler.setLevel(logging.DEBUG)
- self.console_handler.setFormatter(console_formatter)
-
- # Info handler in activity.log
- self.file_handler = logging.FileHandler(
- os.path.join(log_dir, log_file), "a", "utf-8"
- )
- self.file_handler.setLevel(logging.DEBUG)
- info_formatter = AutoGptFormatter(
- "%(asctime)s %(levelname)s %(title)s %(message_no_color)s"
- )
- self.file_handler.setFormatter(info_formatter)
-
-        # Error handler writing to error.log
- error_handler = logging.FileHandler(
- os.path.join(log_dir, error_file), "a", "utf-8"
- )
- error_handler.setLevel(logging.ERROR)
- error_formatter = AutoGptFormatter(
- "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s"
- " %(message_no_color)s"
- )
- error_handler.setFormatter(error_formatter)
-
- self.typing_logger = logging.getLogger("TYPER")
- self.typing_logger.addHandler(self.typing_console_handler)
- self.typing_logger.addHandler(self.file_handler)
- self.typing_logger.addHandler(error_handler)
- self.typing_logger.setLevel(logging.DEBUG)
-
- self.logger = logging.getLogger("LOGGER")
- self.logger.addHandler(self.console_handler)
- self.logger.addHandler(self.file_handler)
- self.logger.addHandler(error_handler)
- self.logger.setLevel(logging.DEBUG)
-
- self.json_logger = logging.getLogger("JSON_LOGGER")
- self.json_logger.addHandler(self.file_handler)
- self.json_logger.addHandler(error_handler)
- self.json_logger.setLevel(logging.DEBUG)
-
- self.speak_mode = False
- self.chat_plugins = []
-
- def typewriter_log(
- self, title="", title_color="", content="", speak_text=False, level=logging.INFO
- ):
- # if speak_text and self.speak_mode:
- # say_text(f"{title}. {content}")
-
- for plugin in self.chat_plugins:
- plugin.report(f"{title}. {content}")
-
- if content:
- if isinstance(content, list):
- content = "\n".join(content)
- else:
- content = ""
-
- self.typing_logger.log(
- level, content, extra={"title": title, "color": title_color}
- )
-
- def debug(
- self,
- message,
- title="",
- title_color="",
- ):
- self._log(title, title_color, message, logging.DEBUG)
-
- def info(
- self,
- message,
- title="",
- title_color="",
- ):
- self._log(title, title_color, message, logging.INFO)
-
- def warn(
- self,
- message,
- title="",
- title_color="",
- ):
- self._log(title, title_color, message, logging.WARN)
-
- def error(self, title, message=""):
- self._log(title, Fore.RED, message, logging.ERROR)
-
- def _log(
- self,
- title: str = "",
- title_color: str = "",
- message: str = "",
- level=logging.INFO,
- ):
- if isinstance(message, list):
- if len(message) > 0:
- message = "\n".join([str(m) for m in message])
- else:
- message = ""
- self.logger.log(
- level, message, extra={"title": str(title), "color": str(title_color)}
- )
-
- def set_level(self, level):
- self.logger.setLevel(level)
- self.typing_logger.setLevel(level)
-
- def double_check(self, additionalText=None):
- if not additionalText:
- additionalText = (
- "Please ensure you've setup and configured everything"
- " correctly. Read https://github.com/Torantulino/Auto-GPT#readme to "
- "double check. You can also create a github issue or join the discord"
- " and ask there!"
- )
-
- self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText)
-
- def log_json(self, data: Any, file_name: str) -> None:
- # Define log directory
- this_files_dir_path = os.path.dirname(__file__)
- log_dir = os.path.join(this_files_dir_path, "../logs")
-
- # Create a handler for JSON files
- json_file_path = os.path.join(log_dir, file_name)
- json_data_handler = JsonFileHandler(json_file_path)
- json_data_handler.setFormatter(JsonFormatter())
-
- # Log the JSON data using the custom file handler
- self.json_logger.addHandler(json_data_handler)
- self.json_logger.debug(data)
- self.json_logger.removeHandler(json_data_handler)
-
- def log_prompt(self, prompt: List[dict]) -> None:
- self.debug("", "-=-=-=-=-=-=-=-=Prompt Start-=-=-=-=-=-=-=-=", Fore.MAGENTA)
- for p in prompt:
- self.debug(
- p["content"]
- if "function_call" not in p
- else p["content"]
- + "\nFunction Call:\n"
- + json.dumps(p["function_call"]),
- title=f'==={p["role"]}===\n',
- title_color=Fore.MAGENTA,
- )
- self.debug("", "-=-=-=-=-=-=-=-=Prompt End-=-=-=-=-=-=-=-=", Fore.MAGENTA)
-
- def get_log_directory(self):
- this_files_dir_path = os.path.dirname(__file__)
- log_dir = os.path.join(this_files_dir_path, "../logs")
- return os.path.abspath(log_dir)
-
-
-"""
-Output stream to console using simulated typing
-"""
-
-
-class TypingConsoleHandler(logging.StreamHandler):
- def emit(self, record):
- min_typing_speed = 0.05
- max_typing_speed = 0.01
-
- msg = self.format(record)
- try:
- words = re.split(r"(\s+)", msg)
- for i, word in enumerate(words):
- print(word, end="", flush=True)
- # if i < len(words) - 1:
- # print(" ", end="", flush=True)
- typing_speed = random.uniform(min_typing_speed, max_typing_speed)
- time.sleep(typing_speed)
- # type faster after each word
- min_typing_speed = min_typing_speed * 0.95
- max_typing_speed = max_typing_speed * 0.95
- print()
- except Exception:
- self.handleError(record)
-
-
-class ConsoleHandler(logging.StreamHandler):
- def emit(self, record) -> None:
- msg = self.format(record)
- try:
- print(msg)
- except Exception:
- self.handleError(record)
-
-
-class AutoGptFormatter(logging.Formatter):
- """
-    Allows handling of the custom placeholders 'title_color' and 'message_no_color'.
-    To use this formatter, make sure to pass 'color' and 'title' as log extras.
- """
-
- def format(self, record: LogRecord) -> str:
- if hasattr(record, "color"):
- record.title_color = (
- getattr(record, "color")
- + getattr(record, "title", "")
- + " "
- + Style.RESET_ALL
- )
- else:
- record.title_color = getattr(record, "title", "")
-
- # Add this line to set 'title' to an empty string if it doesn't exist
- record.title = getattr(record, "title", "")
-
- if hasattr(record, "msg"):
- record.message_no_color = remove_color_codes(getattr(record, "msg"))
- else:
- record.message_no_color = ""
- return super().format(record)
-
-
-def remove_color_codes(s: str) -> str:
- ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
- return ansi_escape.sub("", s)
-
-
-logger = Logger()
-
-
-def get_logger():
- return logger
-
-
-def typewriter_log(content="", color="", level=logging.INFO):
- for line in content.split("\n"):
- logger.typewriter_log(line, title_color=color, level=level)
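A small stand-alone sketch of the ANSI-stripping step that `AutoGptFormatter` and `remove_color_codes` above rely on; the regex mirrors the one in the module, and the sample string is purely illustrative.

```python
import re

ANSI_ESCAPE = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")

def strip_ansi(s: str) -> str:
    """Remove terminal color/style escape sequences before writing to a log file."""
    return ANSI_ESCAPE.sub("", s)

colored = "\x1b[31mERROR\x1b[0m something went wrong"
print(strip_ansi(colored))  # -> "ERROR something went wrong"
```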
diff --git a/spaces/AhmedBadrDev/stomach/README.md b/spaces/AhmedBadrDev/stomach/README.md
deleted file mode 100644
index 441ceb944a403d7039c48c68dd661dcd9536257c..0000000000000000000000000000000000000000
--- a/spaces/AhmedBadrDev/stomach/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stomach
-emoji: 🌍
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GetCode.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GetCode.py
deleted file mode 100644
index 62e64dc8cbc5ad2bb16aef5da8f6d41c26b24170..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GetCode.py
+++ /dev/null
@@ -1,232 +0,0 @@
-
-
-
-import os
-import pickle
-import numpy as np
-from dnnlib import tflib
-import tensorflow as tf
-
-import argparse
-
-def LoadModel(dataset_name):
- # Initialize TensorFlow.
- tflib.init_tf()
- model_path='./model/'
- model_name=dataset_name+'.pkl'
-
- tmp=os.path.join(model_path,model_name)
- with open(tmp, 'rb') as f:
- _, _, Gs = pickle.load(f)
- return Gs
-
-def lerp(a,b,t):
- return a + (b - a) * t
-
-#stylegan-ada
-def SelectName(layer_name,suffix):
- if suffix==None:
- tmp1='add:0' in layer_name
- tmp2='shape=(?,' in layer_name
- tmp4='G_synthesis_1' in layer_name
- tmp= tmp1 and tmp2 and tmp4
- else:
- tmp1=('/Conv0_up'+suffix) in layer_name
- tmp2=('/Conv1'+suffix) in layer_name
- tmp3=('4x4/Conv'+suffix) in layer_name
- tmp4='G_synthesis_1' in layer_name
- tmp5=('/ToRGB'+suffix) in layer_name
- tmp= (tmp1 or tmp2 or tmp3 or tmp5) and tmp4
- return tmp
-
-
-def GetSNames(suffix):
- #get style tensor name
- with tf.Session() as sess:
- op = sess.graph.get_operations()
- layers=[m.values() for m in op]
-
-
- select_layers=[]
- for layer in layers:
- layer_name=str(layer)
- if SelectName(layer_name,suffix):
- select_layers.append(layer[0])
- return select_layers
-
-def SelectName2(layer_name):
- tmp1='mod_bias' in layer_name
- tmp2='mod_weight' in layer_name
- tmp3='ToRGB' in layer_name
-
- tmp= (tmp1 or tmp2) and (not tmp3)
- return tmp
-
-def GetKName(Gs):
-
- layers=[var for name, var in Gs.components.synthesis.vars.items()]
-
- select_layers=[]
- for layer in layers:
- layer_name=str(layer)
- if SelectName2(layer_name):
- select_layers.append(layer)
- return select_layers
-
-def GetCode(Gs,random_state,num_img,num_once,dataset_name):
- rnd = np.random.RandomState(random_state) #5
-
- truncation_psi=0.7
- truncation_cutoff=8
-
- dlatent_avg=Gs.get_var('dlatent_avg')
-
- dlatents=np.zeros((num_img,512),dtype='float32')
- for i in range(int(num_img/num_once)):
- src_latents = rnd.randn(num_once, Gs.input_shape[1])
- src_dlatents = Gs.components.mapping.run(src_latents, None) # [seed, layer, component]
-
- # Apply truncation trick.
- if truncation_psi is not None and truncation_cutoff is not None:
- layer_idx = np.arange(src_dlatents.shape[1])[np.newaxis, :, np.newaxis]
- ones = np.ones(layer_idx.shape, dtype=np.float32)
- coefs = np.where(layer_idx < truncation_cutoff, truncation_psi * ones, ones)
- src_dlatents_np=lerp(dlatent_avg, src_dlatents, coefs)
- src_dlatents=src_dlatents_np[:,0,:].astype('float32')
- dlatents[(i*num_once):((i+1)*num_once),:]=src_dlatents
- print('get all z and w')
-
- tmp='./npy/'+dataset_name+'/W'
- np.save(tmp,dlatents)
-
-
-def GetImg(Gs,num_img,num_once,dataset_name,save_name='images'):
- print('Generate Image')
- tmp='./npy/'+dataset_name+'/W.npy'
- dlatents=np.load(tmp)
- fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
-
- all_images=[]
- for i in range(int(num_img/num_once)):
- print(i)
- images=[]
- for k in range(num_once):
- tmp=dlatents[i*num_once+k]
- tmp=tmp[None,None,:]
- tmp=np.tile(tmp,(1,Gs.components.synthesis.input_shape[1],1))
- image2= Gs.components.synthesis.run(tmp, randomize_noise=False, output_transform=fmt)
- images.append(image2)
-
- images=np.concatenate(images)
-
- all_images.append(images)
-
- all_images=np.concatenate(all_images)
-
- tmp='./npy/'+dataset_name+'/'+save_name
- np.save(tmp,all_images)
-
-def GetS(dataset_name,num_img):
- print('Generate S')
- tmp='./npy/'+dataset_name+'/W.npy'
- dlatents=np.load(tmp)[:num_img]
-
- with tf.Session() as sess:
- init = tf.global_variables_initializer()
- sess.run(init)
-
- Gs=LoadModel(dataset_name)
- Gs.print_layers() #for ada
- select_layers1=GetSNames(suffix=None) #None,'/mul_1:0','/mod_weight/read:0','/MatMul:0'
- dlatents=dlatents[:,None,:]
- dlatents=np.tile(dlatents,(1,Gs.components.synthesis.input_shape[1],1))
-
- all_s = sess.run(
- select_layers1,
- feed_dict={'G_synthesis_1/dlatents_in:0': dlatents})
-
- layer_names=[layer.name for layer in select_layers1]
- save_tmp=[layer_names,all_s]
- return save_tmp
-
-
-
-
-def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False):
- """Convert a minibatch of images from float32 to uint8 with configurable dynamic range.
- Can be used as an output transformation for Network.run().
- """
- if nchw_to_nhwc:
- images = np.transpose(images, [0, 2, 3, 1])
-
- scale = 255 / (drange[1] - drange[0])
- images = images * scale + (0.5 - drange[0] * scale)
-
- np.clip(images, 0, 255, out=images)
- images=images.astype('uint8')
- return images
-
-
-def GetCodeMS(dlatents):
- m=[]
- std=[]
- for i in range(len(dlatents)):
- tmp= dlatents[i]
- tmp_mean=tmp.mean(axis=0)
- tmp_std=tmp.std(axis=0)
- m.append(tmp_mean)
- std.append(tmp_std)
- return m,std
-
-
-
-#%%
-if __name__ == "__main__":
-
-
-    parser = argparse.ArgumentParser(description='Extract W/S latent codes from a pretrained StyleGAN2 model.')
-
- parser.add_argument('--dataset_name',type=str,default='ffhq',
- help='name of dataset, for example, ffhq')
- parser.add_argument('--code_type',choices=['w','s','s_mean_std'],default='w')
-
- args = parser.parse_args()
- random_state=5
- num_img=100_000
- num_once=1_000
- dataset_name=args.dataset_name
-
- if not os.path.isfile('./model/'+dataset_name+'.pkl'):
- url='https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/'
- name='stylegan2-'+dataset_name+'-config-f.pkl'
- os.system('wget ' +url+name + ' -P ./model/')
- os.system('mv ./model/'+name+' ./model/'+dataset_name+'.pkl')
-
- if not os.path.isdir('./npy/'+dataset_name):
- os.system('mkdir ./npy/'+dataset_name)
-
- if args.code_type=='w':
- Gs=LoadModel(dataset_name=dataset_name)
- GetCode(Gs,random_state,num_img,num_once,dataset_name)
-# GetImg(Gs,num_img=num_img,num_once=num_once,dataset_name=dataset_name,save_name='images_100K') #no need
- elif args.code_type=='s':
- save_name='S'
- save_tmp=GetS(dataset_name,num_img=2_000)
- tmp='./npy/'+dataset_name+'/'+save_name
- with open(tmp, "wb") as fp:
- pickle.dump(save_tmp, fp)
-
- elif args.code_type=='s_mean_std':
- save_tmp=GetS(dataset_name,num_img=num_img)
- dlatents=save_tmp[1]
- m,std=GetCodeMS(dlatents)
- save_tmp=[m,std]
- save_name='S_mean_std'
- tmp='./npy/'+dataset_name+'/'+save_name
- with open(tmp, "wb") as fp:
- pickle.dump(save_tmp, fp)
-
-
-
-
-
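The truncation trick applied inside `GetCode` above is worth isolating. This hedged NumPy sketch (illustrative shapes, not the StyleGAN2 API) shows how W codes are linearly interpolated toward the average latent for the first `truncation_cutoff` layers, trading sample diversity for fidelity.

```python
import numpy as np

def truncate(dlatents, dlatent_avg, psi=0.7, cutoff=8):
    # dlatents: (N, num_layers, 512); dlatent_avg: (512,)
    layer_idx = np.arange(dlatents.shape[1])[np.newaxis, :, np.newaxis]   # (1, num_layers, 1)
    coefs = np.where(layer_idx < cutoff, psi, 1.0).astype(np.float32)
    return dlatent_avg + (dlatents - dlatent_avg) * coefs                 # lerp toward the average

w = np.random.randn(2, 18, 512).astype(np.float32)
w_avg = np.zeros(512, dtype=np.float32)
print(truncate(w, w_avg).shape)  # (2, 18, 512)
```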
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py
deleted file mode 100644
index 3a8c3167954016b3b89f16caf8348661cd3a27ef..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Optional, Tuple
-
-import jax
-import jax.numpy as jnp
-from flax import linen as nn
-from flax.core.frozen_dict import FrozenDict
-from transformers import CLIPConfig, FlaxPreTrainedModel
-from transformers.models.clip.modeling_flax_clip import FlaxCLIPVisionModule
-
-
-def jax_cosine_distance(emb_1, emb_2, eps=1e-12):
- norm_emb_1 = jnp.divide(emb_1.T, jnp.clip(jnp.linalg.norm(emb_1, axis=1), a_min=eps)).T
- norm_emb_2 = jnp.divide(emb_2.T, jnp.clip(jnp.linalg.norm(emb_2, axis=1), a_min=eps)).T
- return jnp.matmul(norm_emb_1, norm_emb_2.T)
-
-
-class FlaxStableDiffusionSafetyCheckerModule(nn.Module):
- config: CLIPConfig
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.vision_model = FlaxCLIPVisionModule(self.config.vision_config)
- self.visual_projection = nn.Dense(self.config.projection_dim, use_bias=False, dtype=self.dtype)
-
- self.concept_embeds = self.param("concept_embeds", jax.nn.initializers.ones, (17, self.config.projection_dim))
- self.special_care_embeds = self.param(
- "special_care_embeds", jax.nn.initializers.ones, (3, self.config.projection_dim)
- )
-
- self.concept_embeds_weights = self.param("concept_embeds_weights", jax.nn.initializers.ones, (17,))
- self.special_care_embeds_weights = self.param("special_care_embeds_weights", jax.nn.initializers.ones, (3,))
-
- def __call__(self, clip_input):
- pooled_output = self.vision_model(clip_input)[1]
- image_embeds = self.visual_projection(pooled_output)
-
- special_cos_dist = jax_cosine_distance(image_embeds, self.special_care_embeds)
- cos_dist = jax_cosine_distance(image_embeds, self.concept_embeds)
-
-        # increase this value to create a stronger `nsfw` filter
- # at the cost of increasing the possibility of filtering benign image inputs
- adjustment = 0.0
-
- special_scores = special_cos_dist - self.special_care_embeds_weights[None, :] + adjustment
- special_scores = jnp.round(special_scores, 3)
- is_special_care = jnp.any(special_scores > 0, axis=1, keepdims=True)
- # Use a lower threshold if an image has any special care concept
- special_adjustment = is_special_care * 0.01
-
- concept_scores = cos_dist - self.concept_embeds_weights[None, :] + special_adjustment
- concept_scores = jnp.round(concept_scores, 3)
- has_nsfw_concepts = jnp.any(concept_scores > 0, axis=1)
-
- return has_nsfw_concepts
-
-
-class FlaxStableDiffusionSafetyChecker(FlaxPreTrainedModel):
- config_class = CLIPConfig
- main_input_name = "clip_input"
- module_class = FlaxStableDiffusionSafetyCheckerModule
-
- def __init__(
- self,
- config: CLIPConfig,
- input_shape: Optional[Tuple] = None,
- seed: int = 0,
- dtype: jnp.dtype = jnp.float32,
- _do_init: bool = True,
- **kwargs,
- ):
- if input_shape is None:
- input_shape = (1, 224, 224, 3)
- module = self.module_class(config=config, dtype=dtype, **kwargs)
- super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init)
-
- def init_weights(self, rng: jax.random.KeyArray, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict:
- # init input tensor
- clip_input = jax.random.normal(rng, input_shape)
-
- params_rng, dropout_rng = jax.random.split(rng)
- rngs = {"params": params_rng, "dropout": dropout_rng}
-
- random_params = self.module.init(rngs, clip_input)["params"]
-
- return random_params
-
- def __call__(
- self,
- clip_input,
- params: dict = None,
- ):
- clip_input = jnp.transpose(clip_input, (0, 2, 3, 1))
-
- return self.module.apply(
- {"params": params or self.params},
- jnp.array(clip_input, dtype=jnp.float32),
- rngs={},
- )
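An illustrative NumPy sketch (not the Flax module's API) of the thresholding logic in `__call__` above: an image is flagged when any concept's cosine similarity exceeds its learned threshold, and matching a "special care" concept lowers all thresholds by a small adjustment.

```python
import numpy as np

def flag_nsfw(cos_dist, concept_thresholds, special_cos_dist, special_thresholds):
    special_scores = special_cos_dist - special_thresholds[None, :]       # (B, n_special)
    is_special = np.any(special_scores > 0, axis=1, keepdims=True)        # (B, 1)
    adjustment = is_special * 0.01                                        # lower the bar for flagged images
    concept_scores = cos_dist - concept_thresholds[None, :] + adjustment  # (B, n_concepts)
    return np.any(concept_scores > 0, axis=1)

cos = np.full((2, 17), 0.1)
cos[1, 3] = 0.35
print(flag_nsfw(cos, np.full(17, 0.3), np.zeros((2, 3)), np.full(3, 0.5)))  # [False  True]
```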
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/fast_rcnn_r50_fpn.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/fast_rcnn_r50_fpn.py
deleted file mode 100644
index 1099165b2a7a7af5cee60cf757ef674e768c6a8a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/fast_rcnn_r50_fpn.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# model settings
-model = dict(
- type='FastRCNN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- roi_head=dict(
- type='StandardRoIHead',
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- type='Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rcnn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100)))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 1bf6780f2c821052692ddcb904bd10e6256c1e71..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_r50-d8_512x512_160k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py
deleted file mode 100644
index 923731f74f80c11e196f6099b1c84875686cd441..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './ocrnet_hr18_512x1024_40k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w18_small',
- backbone=dict(
- extra=dict(
- stage1=dict(num_blocks=(2, )),
- stage2=dict(num_blocks=(2, 2)),
- stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
- stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
diff --git a/spaces/Anni123/AuRoRA/retrieval_utils.py b/spaces/Anni123/AuRoRA/retrieval_utils.py
deleted file mode 100644
index 76306636afe2740ad5d85acf117c3c8ce34b6d84..0000000000000000000000000000000000000000
--- a/spaces/Anni123/AuRoRA/retrieval_utils.py
+++ /dev/null
@@ -1,248 +0,0 @@
-'''
-Modified from https://github.com/RuochenZhao/Verify-and-Edit
-'''
-
-import wikipedia
-import wikipediaapi
-import spacy
-import numpy as np
-import ngram
-#import nltk
-import torch
-import sklearn
-#from textblob import TextBlob
-from nltk import tokenize
-from sentence_transformers import SentenceTransformer
-from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoder, DPRContextEncoderTokenizer
-from llm_utils import decoder_for_gpt3
-from utils import entity_cleansing, knowledge_cleansing
-import nltk
-nltk.download('punkt')
-
-wiki_wiki = wikipediaapi.Wikipedia('en')
-nlp = spacy.load("en_core_web_sm")
-ENT_TYPE = ['EVENT', 'FAC', 'GPE', 'LANGUAGE', 'LAW', 'LOC', 'NORP', 'ORG', 'PERSON', 'PRODUCT', 'WORK_OF_ART']
-
-CTX_ENCODER = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
-CTX_TOKENIZER = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", model_max_length = 512)
-Q_ENCODER = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
-Q_TOKENIZER = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base", model_max_length = 512)
-
-
-## todo: extract entities from ConceptNet
-def find_ents(text, engine):
- doc = nlp(text)
- valid_ents = []
- for ent in doc.ents:
- if ent.label_ in ENT_TYPE:
- valid_ents.append(ent.text)
-    # in case the entity list is empty, fall back to the LLM to extract entities
- if valid_ents == []:
- input = "Question: " + "[ " + text + "]\n"
- input += "Output the entities in Question separated by comma: "
- response = decoder_for_gpt3(input, 32, engine=engine)
- valid_ents = entity_cleansing(response)
- return valid_ents
-
-
-def relevant_pages_for_ents(valid_ents, topk = 5):
- '''
- Input: a list of valid entities
- Output: a list of list containing topk pages for each entity
- '''
- if valid_ents == []:
- return []
- titles = []
- for ve in valid_ents:
- title = wikipedia.search(ve)[:topk]
- titles.append(title)
- #titles = list(dict.fromkeys(titles))
- return titles
-
-
-def relevant_pages_for_text(text, topk = 5):
- return wikipedia.search(text)[:topk]
-
-
-def get_wiki_objs(pages):
- '''
- Input: a list of list
- Output: a list of list
- '''
- if pages == []:
- return []
- obj_pages = []
- for titles_for_ve in pages:
- pages_for_ve = [wiki_wiki.page(title) for title in titles_for_ve]
- obj_pages.append(pages_for_ve)
- return obj_pages
-
-
-def get_linked_pages(wiki_pages, topk = 5):
- linked_ents = []
- for wp in wiki_pages:
- linked_ents += list(wp.links.values())
- if topk != -1:
- linked_ents = linked_ents[:topk]
- return linked_ents
-
-
-def get_texts_to_pages(pages, topk = 2):
- '''
- Input: list of list of pages
- Output: list of list of texts
- '''
- total_texts = []
- for ve_pages in pages:
- ve_texts = []
- for p in ve_pages:
- text = p.text
- text = tokenize.sent_tokenize(text)[:topk]
- text = ' '.join(text)
- ve_texts.append(text)
- total_texts.append(ve_texts)
- return total_texts
-
-
-
-def DPR_embeddings(q_encoder, q_tokenizer, question):
- question_embedding = q_tokenizer(question, return_tensors="pt",max_length=5, truncation=True)
- with torch.no_grad():
- try:
- question_embedding = q_encoder(**question_embedding)[0][0]
- except:
- print(question)
- print(question_embedding['input_ids'].size())
- raise Exception('end')
- question_embedding = question_embedding.numpy()
- return question_embedding
-
-def model_embeddings(sentence, model):
- embedding = model.encode([sentence])
- return embedding[0] #should return an array of shape 384
-
-##todo: plus overlap filtering
-def filtering_retrieved_texts(question, ent_texts, retr_method="wikipedia_dpr", topk=1):
- filtered_texts = []
- for texts in ent_texts:
- if texts != []: #not empty list
- if retr_method == "ngram":
- pars = np.array([ngram.NGram.compare(question, sent, N=1) for sent in texts])
- #argsort: smallest to biggest
- pars = pars.argsort()[::-1][:topk]
- else:
- if retr_method == "wikipedia_dpr":
- sen_embeds = [DPR_embeddings(Q_ENCODER, Q_TOKENIZER, question)]
- par_embeds = [DPR_embeddings(CTX_ENCODER, CTX_TOKENIZER, s) for s in texts]
- else:
- embedding_model = SentenceTransformer('paraphrase-MiniLM-L6-v2')
- sen_embeds = [model_embeddings(question, embedding_model)]
- par_embeds = [model_embeddings(s, embedding_model) for s in texts]
- pars = sklearn.metrics.pairwise.pairwise_distances(sen_embeds, par_embeds)
- pars = pars.argsort(axis=1)[0][:topk]
- filtered_texts += [texts[i] for i in pars]
- filtered_texts = list(dict.fromkeys(filtered_texts))
- return filtered_texts
-
-def join_knowledge(filtered_texts):
- if filtered_texts == []:
- return ""
- return " ".join(filtered_texts)
-
-def retrieve_for_question_kb(question, engine, know_type="entity_know", no_links=False):
- valid_ents = find_ents(question, engine)
- print(valid_ents)
-
- # find pages
- page_titles = []
- if "entity" in know_type:
- pages_for_ents = relevant_pages_for_ents(valid_ents, topk = 5) #list of list
- if pages_for_ents != []:
- page_titles += pages_for_ents
- if "question" in know_type:
- pages_for_question = relevant_pages_for_text(question, topk = 5)
- if pages_for_question != []:
- page_titles += pages_for_question
- pages = get_wiki_objs(page_titles) #list of list
- if pages == []:
- return ""
- new_pages = []
- assert page_titles != []
- assert pages != []
-
- print(page_titles)
- #print(pages)
- for i, ve_pt in enumerate(page_titles):
- new_ve_pages = []
- for j, pt in enumerate(ve_pt):
- if 'disambiguation' in pt:
- new_ve_pages += get_linked_pages([pages[i][j]], topk=-1)
- else:
- new_ve_pages += [pages[i][j]]
- new_pages.append(new_ve_pages)
-
- pages = new_pages
-
- if not no_links:
- # add linked pages
- for ve_pages in pages:
- ve_pages += get_linked_pages(ve_pages, topk=5)
- ve_pages = list(dict.fromkeys(ve_pages))
- #get texts
- texts = get_texts_to_pages(pages, topk=1)
- filtered_texts = filtering_retrieved_texts(question, texts)
- joint_knowledge = join_knowledge(filtered_texts)
-
-
- return valid_ents, joint_knowledge
-
-def retrieve_for_question(question, engine, retrieve_source="llm_kb"):
- # Retrieve knowledge from LLM
- if "llm" in retrieve_source:
- self_retrieve_prompt = "Question: " + "[ " + question + "]\n"
- self_retrieve_prompt += "Necessary knowledge about the question by not answering the question: "
- self_retrieve_knowledge = decoder_for_gpt3(self_retrieve_prompt, 256, engine=engine)
- self_retrieve_knowledge = knowledge_cleansing(self_retrieve_knowledge)
- print("------Self_Know------")
- print(self_retrieve_knowledge)
-
- # Retrieve knowledge from KB
- if "kb" in retrieve_source:
- entities, kb_retrieve_knowledge = retrieve_for_question_kb(question, engine, no_links=True)
- if kb_retrieve_knowledge != "":
- print("------KB_Know------")
- print(kb_retrieve_knowledge)
-
- return entities, self_retrieve_knowledge, kb_retrieve_knowledge
-
-def refine_for_question(question, engine, self_retrieve_knowledge, kb_retrieve_knowledge, retrieve_source="llm_kb"):
-
- # Refine knowledge
- if retrieve_source == "llm_only":
- refine_knowledge = self_retrieve_knowledge
- elif retrieve_source == "kb_only":
- if kb_retrieve_knowledge != "":
- refine_prompt = "Question: " + "[ " + question + "]\n"
- refine_prompt += "Knowledge: " + "[ " + kb_retrieve_knowledge + "]\n"
- refine_prompt += "Based on Knowledge, output the brief and refined knowledge necessary for Question by not giving the answer: "
- refine_knowledge = decoder_for_gpt3(refine_prompt, 256, engine=engine)
- print("------Refined_Know------")
- print(refine_knowledge)
- else:
- refine_knowledge = ""
- elif retrieve_source == "llm_kb":
- if kb_retrieve_knowledge != "":
- #refine_prompt = "Question: " + "[ " + question + "]\n"
- refine_prompt = "Knowledge_1: " + "[ " + self_retrieve_knowledge + "]\n"
- refine_prompt += "Knowledge_2: " + "[ " + kb_retrieve_knowledge + "]\n"
- #refine_prompt += "By using Knowledge_2 to check Knowledge_1, output the brief and correct knowledge necessary for Question: "
- refine_prompt += "By using Knowledge_2 to check Knowledge_1, output the brief and correct knowledge: "
- refine_knowledge = decoder_for_gpt3(refine_prompt, 256, engine=engine)
- refine_knowledge = knowledge_cleansing(refine_knowledge)
- #refine_knowledge = kb_retrieve_knowledge + refine_knowledge
- print("------Refined_Know------")
- print(refine_knowledge)
- else:
- refine_knowledge = self_retrieve_knowledge
-
- return refine_knowledge
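The core of `filtering_retrieved_texts` above is a simple embed-and-rank step. The sketch below reproduces just that step with the same `paraphrase-MiniLM-L6-v2` sentence-transformer used in the module (downloaded on first use); the question and passages are toy examples.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import pairwise_distances

def top_k_passages(question, passages, k=1):
    model = SentenceTransformer("paraphrase-MiniLM-L6-v2")   # same model as in the module
    q_emb = model.encode([question])                         # (1, 384)
    p_emb = model.encode(passages)                           # (len(passages), 384)
    dists = pairwise_distances(q_emb, p_emb)[0]              # smaller distance = more similar
    return [passages[i] for i in np.argsort(dists)[:k]]

print(top_k_passages("Who painted the Mona Lisa?",
                     ["Leonardo da Vinci painted the Mona Lisa.",
                      "The Eiffel Tower is in Paris."]))
```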
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/utils/fft_pytorch.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/utils/fft_pytorch.py
deleted file mode 100644
index 55075c7fb6e8c539c306cf1a41fa95824850c5ca..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/utils/fft_pytorch.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/python
-#****************************************************************#
-# ScriptName: fft_pytorch.py
-# Author: Anonymous_123
-# Create Date: 2022-08-15 11:33
-# Modify Author: Anonymous_123
-# Modify Date: 2022-08-18 17:46
-# Function:
-#***************************************************************#
-
-import torch
-import torch.nn as nn
-import torch.fft as fft
-import cv2
-import numpy as np
-import torchvision.transforms as transforms
-from PIL import Image
-
-
-def lowpass(input, limit):
- pass1 = torch.abs(fft.rfftfreq(input.shape[-1])) < limit
- pass2 = torch.abs(fft.fftfreq(input.shape[-2])) < limit
- kernel = torch.outer(pass2, pass1)
- fft_input = fft.rfft2(input)
- return fft.irfft2(fft_input*kernel, s=input.shape[-2:])
-
-class HighFrequencyLoss(nn.Module):
- def __init__(self, size=(224,224)):
- super(HighFrequencyLoss, self).__init__()
- '''
- self.h,self.w = size
- self.lpf = torch.zeros((self.h,1))
- R = (self.h+self.w)//8
- for x in range(self.w):
- for y in range(self.h):
- if ((x-(self.w-1)/2)**2 + (y-(self.h-1)/2)**2) < (R**2):
- self.lpf[y,x] = 1
- self.hpf = 1-self.lpf
- '''
-
- def forward(self, x):
- f = fft.fftn(x, dim=(2,3))
- loss = torch.abs(f).mean()
-
- # f = torch.roll(f,(self.h//2,self.w//2),dims=(2,3))
- # f_l = torch.mean(f * self.lpf)
- # f_h = torch.mean(f * self.hpf)
-
- return loss
-
-if __name__ == '__main__':
- import pdb
- pdb.set_trace()
- HF = HighFrequencyLoss()
- transform = transforms.Compose([transforms.ToTensor()])
-
- # img = cv2.imread('test_imgs/ILSVRC2012_val_00001935.JPEG')
- img = cv2.imread('../tmp.jpg')
- H,W,C = img.shape
- imgs = []
- for i in range(10):
- img_ = img[:, 224*i:224*(i+1), :]
- print(img_.shape)
- img_tensor = transform(Image.fromarray(img_[:,:,::-1])).unsqueeze(0)
- loss = HF(img_tensor).item()
- cv2.putText(img_, str(loss)[:6], (5,50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
- imgs.append(img_)
-
- cv2.imwrite('tmp.jpg', cv2.hconcat(imgs))
-
-
-
-
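For reference, a hedged usage sketch of the frequency-domain filtering idea behind `lowpass` above: build a mask over FFT frequencies, multiply it into the spectrum, and invert. Shapes and the cutoff value are illustrative.

```python
import torch
import torch.fft as fft

def lowpass(x, limit):
    # keep only frequencies whose magnitude is below `limit` along both spatial axes
    pass1 = (torch.abs(fft.rfftfreq(x.shape[-1])) < limit).float()
    pass2 = (torch.abs(fft.fftfreq(x.shape[-2])) < limit).float()
    kernel = torch.outer(pass2, pass1)                        # (H, W//2 + 1)
    return fft.irfft2(fft.rfft2(x) * kernel, s=x.shape[-2:])

img = torch.rand(1, 3, 64, 64)
smooth = lowpass(img, limit=0.1)
print(img.shape, smooth.shape, (img - smooth).abs().mean())
```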
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align.py
deleted file mode 100644
index 0755aefc66e67233ceae0f4b77948301c443e9fb..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from ..utils import deprecated_api_warning, ext_loader
-
-ext_module = ext_loader.load_ext('_ext',
- ['roi_align_forward', 'roi_align_backward'])
-
-
-class RoIAlignFunction(Function):
-
- @staticmethod
- def symbolic(g, input, rois, output_size, spatial_scale, sampling_ratio,
- pool_mode, aligned):
- from ..onnx import is_custom_op_loaded
- has_custom_op = is_custom_op_loaded()
- if has_custom_op:
- return g.op(
- 'mmcv::MMCVRoiAlign',
- input,
- rois,
- output_height_i=output_size[0],
- output_width_i=output_size[1],
- spatial_scale_f=spatial_scale,
- sampling_ratio_i=sampling_ratio,
- mode_s=pool_mode,
- aligned_i=aligned)
- else:
- from torch.onnx.symbolic_opset9 import sub, squeeze
- from torch.onnx.symbolic_helper import _slice_helper
- from torch.onnx import TensorProtoDataType
- # batch_indices = rois[:, 0].long()
- batch_indices = _slice_helper(
- g, rois, axes=[1], starts=[0], ends=[1])
- batch_indices = squeeze(g, batch_indices, 1)
- batch_indices = g.op(
- 'Cast', batch_indices, to_i=TensorProtoDataType.INT64)
- # rois = rois[:, 1:]
- rois = _slice_helper(g, rois, axes=[1], starts=[1], ends=[5])
- if aligned:
- # rois -= 0.5/spatial_scale
- aligned_offset = g.op(
- 'Constant',
- value_t=torch.tensor([0.5 / spatial_scale],
- dtype=torch.float32))
- rois = sub(g, rois, aligned_offset)
- # roi align
- return g.op(
- 'RoiAlign',
- input,
- rois,
- batch_indices,
- output_height_i=output_size[0],
- output_width_i=output_size[1],
- spatial_scale_f=spatial_scale,
- sampling_ratio_i=max(0, sampling_ratio),
- mode_s=pool_mode)
-
- @staticmethod
- def forward(ctx,
- input,
- rois,
- output_size,
- spatial_scale=1.0,
- sampling_ratio=0,
- pool_mode='avg',
- aligned=True):
- ctx.output_size = _pair(output_size)
- ctx.spatial_scale = spatial_scale
- ctx.sampling_ratio = sampling_ratio
- assert pool_mode in ('max', 'avg')
- ctx.pool_mode = 0 if pool_mode == 'max' else 1
- ctx.aligned = aligned
- ctx.input_shape = input.size()
-
- assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!'
-
- output_shape = (rois.size(0), input.size(1), ctx.output_size[0],
- ctx.output_size[1])
- output = input.new_zeros(output_shape)
- if ctx.pool_mode == 0:
- argmax_y = input.new_zeros(output_shape)
- argmax_x = input.new_zeros(output_shape)
- else:
- argmax_y = input.new_zeros(0)
- argmax_x = input.new_zeros(0)
-
- ext_module.roi_align_forward(
- input,
- rois,
- output,
- argmax_y,
- argmax_x,
- aligned_height=ctx.output_size[0],
- aligned_width=ctx.output_size[1],
- spatial_scale=ctx.spatial_scale,
- sampling_ratio=ctx.sampling_ratio,
- pool_mode=ctx.pool_mode,
- aligned=ctx.aligned)
-
- ctx.save_for_backward(rois, argmax_y, argmax_x)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- rois, argmax_y, argmax_x = ctx.saved_tensors
- grad_input = grad_output.new_zeros(ctx.input_shape)
- # complex head architecture may cause grad_output uncontiguous.
- grad_output = grad_output.contiguous()
- ext_module.roi_align_backward(
- grad_output,
- rois,
- argmax_y,
- argmax_x,
- grad_input,
- aligned_height=ctx.output_size[0],
- aligned_width=ctx.output_size[1],
- spatial_scale=ctx.spatial_scale,
- sampling_ratio=ctx.sampling_ratio,
- pool_mode=ctx.pool_mode,
- aligned=ctx.aligned)
- return grad_input, None, None, None, None, None, None
-
-
-roi_align = RoIAlignFunction.apply
-
-
-class RoIAlign(nn.Module):
- """RoI align pooling layer.
-
- Args:
- output_size (tuple): h, w
- spatial_scale (float): scale the input boxes by this number
-        sampling_ratio (int): number of input samples to take for each
-            output sample. 0 to take samples densely for current models.
- pool_mode (str, 'avg' or 'max'): pooling mode in each bin.
-        aligned (bool): if False, use the legacy implementation in
-            MMDetection. If True, align the results more precisely.
- use_torchvision (bool): whether to use roi_align from torchvision.
-
- Note:
- The implementation of RoIAlign when aligned=True is modified from
- https://github.com/facebookresearch/detectron2/
-
- The meaning of aligned=True:
-
- Given a continuous coordinate c, its two neighboring pixel
- indices (in our pixel model) are computed by floor(c - 0.5) and
- ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete
- indices [0] and [1] (which are sampled from the underlying signal
- at continuous coordinates 0.5 and 1.5). But the original roi_align
- (aligned=False) does not subtract the 0.5 when computing
- neighboring pixel indices and therefore it uses pixels with a
- slightly incorrect alignment (relative to our pixel model) when
- performing bilinear interpolation.
-
- With `aligned=True`,
- we first appropriately scale the ROI and then shift it by -0.5
- prior to calling roi_align. This produces the correct neighbors;
-
-        In practice this difference does not affect the model's
-        performance when RoIAlign is used together with conv layers.
- """
-
- @deprecated_api_warning(
- {
- 'out_size': 'output_size',
- 'sample_num': 'sampling_ratio'
- },
- cls_name='RoIAlign')
- def __init__(self,
- output_size,
- spatial_scale=1.0,
- sampling_ratio=0,
- pool_mode='avg',
- aligned=True,
- use_torchvision=False):
- super(RoIAlign, self).__init__()
-
- self.output_size = _pair(output_size)
- self.spatial_scale = float(spatial_scale)
- self.sampling_ratio = int(sampling_ratio)
- self.pool_mode = pool_mode
- self.aligned = aligned
- self.use_torchvision = use_torchvision
-
- def forward(self, input, rois):
- """
- Args:
- input: NCHW images
- rois: Bx5 boxes. First column is the index into N.\
- The other 4 columns are xyxy.
- """
- if self.use_torchvision:
- from torchvision.ops import roi_align as tv_roi_align
- if 'aligned' in tv_roi_align.__code__.co_varnames:
- return tv_roi_align(input, rois, self.output_size,
- self.spatial_scale, self.sampling_ratio,
- self.aligned)
- else:
- if self.aligned:
- rois -= rois.new_tensor([0.] +
- [0.5 / self.spatial_scale] * 4)
- return tv_roi_align(input, rois, self.output_size,
- self.spatial_scale, self.sampling_ratio)
- else:
- return roi_align(input, rois, self.output_size, self.spatial_scale,
- self.sampling_ratio, self.pool_mode, self.aligned)
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(output_size={self.output_size}, '
- s += f'spatial_scale={self.spatial_scale}, '
- s += f'sampling_ratio={self.sampling_ratio}, '
- s += f'pool_mode={self.pool_mode}, '
- s += f'aligned={self.aligned}, '
- s += f'use_torchvision={self.use_torchvision})'
- return s
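
For reference, here is a minimal usage sketch for the `RoIAlign` module deleted above. It is illustrative only: the tensor shapes, RoI coordinates, and `spatial_scale` are made up, and it assumes a working mmcv build so that the compiled `_ext` roi_align ops are importable.

```python
# Hypothetical usage of the RoIAlign module above (assumes the mmcv `_ext`
# extension is available).
import torch

feats = torch.randn(2, 256, 64, 64)            # NCHW feature map (assumed stride-4 level)
# Each RoI is (batch_idx, x1, y1, x2, y2) in input-image coordinates.
rois = torch.tensor([[0., 10., 10., 50., 60.],
                     [1., 20., 30., 120., 110.]])

layer = RoIAlign(output_size=(7, 7),
                 spatial_scale=1 / 4,           # maps image coords onto this feature level
                 sampling_ratio=0,              # 0 = sample densely
                 pool_mode='avg',
                 aligned=True)
pooled = layer(feats, rois)
print(pooled.shape)                             # torch.Size([2, 256, 7, 7])
```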
diff --git a/spaces/Ariharasudhan/YoloV5/utils/loggers/comet/hpo.py b/spaces/Ariharasudhan/YoloV5/utils/loggers/comet/hpo.py
deleted file mode 100644
index 7dd5c92e8de170222b3cd3eae858f4f3cfddaff6..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/loggers/comet/hpo.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import argparse
-import json
-import logging
-import os
-import sys
-from pathlib import Path
-
-import comet_ml
-
-logger = logging.getLogger(__name__)
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-from train import train
-from utils.callbacks import Callbacks
-from utils.general import increment_path
-from utils.torch_utils import select_device
-
-# Project Configuration
-config = comet_ml.config.get_config()
-COMET_PROJECT_NAME = config.get_string(os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5")
-
-
-def get_args(known=False):
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
- parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300, help='total training epochs')
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
- parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--noval', action='store_true', help='only validate final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
- parser.add_argument('--noplots', action='store_true', help='save no plot files')
- parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
- parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
- parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
- parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
- parser.add_argument('--seed', type=int, default=0, help='Global training seed')
- parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify')
-
- # Weights & Biases arguments
- parser.add_argument('--entity', default=None, help='W&B: Entity')
- parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
- parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')
-
- # Comet Arguments
- parser.add_argument("--comet_optimizer_config", type=str, help="Comet: Path to a Comet Optimizer Config File.")
- parser.add_argument("--comet_optimizer_id", type=str, help="Comet: ID of the Comet Optimizer sweep.")
- parser.add_argument("--comet_optimizer_objective", type=str, help="Comet: Set to 'minimize' or 'maximize'.")
- parser.add_argument("--comet_optimizer_metric", type=str, help="Comet: Metric to Optimize.")
- parser.add_argument("--comet_optimizer_workers",
- type=int,
- default=1,
- help="Comet: Number of Parallel Workers to use with the Comet Optimizer.")
-
- return parser.parse_known_args()[0] if known else parser.parse_args()
-
-
-def run(parameters, opt):
- hyp_dict = {k: v for k, v in parameters.items() if k not in ["epochs", "batch_size"]}
-
- opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve))
- opt.batch_size = parameters.get("batch_size")
- opt.epochs = parameters.get("epochs")
-
- device = select_device(opt.device, batch_size=opt.batch_size)
- train(hyp_dict, opt, device, callbacks=Callbacks())
-
-
-if __name__ == "__main__":
- opt = get_args(known=True)
-
- opt.weights = str(opt.weights)
- opt.cfg = str(opt.cfg)
- opt.data = str(opt.data)
- opt.project = str(opt.project)
-
- optimizer_id = os.getenv("COMET_OPTIMIZER_ID")
- if optimizer_id is None:
- with open(opt.comet_optimizer_config) as f:
- optimizer_config = json.load(f)
- optimizer = comet_ml.Optimizer(optimizer_config)
- else:
- optimizer = comet_ml.Optimizer(optimizer_id)
-
- opt.comet_optimizer_id = optimizer.id
- status = optimizer.status()
-
- opt.comet_optimizer_objective = status["spec"]["objective"]
- opt.comet_optimizer_metric = status["spec"]["metric"]
-
- logger.info("COMET INFO: Starting Hyperparameter Sweep")
- for parameter in optimizer.get_parameters():
- run(parameter["parameters"], opt)
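
When `COMET_OPTIMIZER_ID` is not set, the script above expects `--comet_optimizer_config` to point at a JSON sweep definition. A rough sketch of such a config, written as the Python dict it would deserialize to, is shown below; the key names ("algorithm", "spec", "parameters") follow Comet's Optimizer format as I recall it, so treat them as assumptions and verify against the Comet docs for your SDK version.

```python
# Hypothetical sweep config for comet_ml.Optimizer; field names are assumptions.
optimizer_config = {
    "algorithm": "random",
    "spec": {
        "objective": "maximize",       # read back above as status["spec"]["objective"]
        "metric": "metrics/mAP_0.5",   # read back above as status["spec"]["metric"]
        "maxCombo": 10,                # stop after 10 sampled combinations
    },
    "parameters": {
        "lr0": {"type": "float", "min": 1e-4, "max": 1e-2},
        "batch_size": {"type": "discrete", "values": [16, 32]},
        "epochs": {"type": "discrete", "values": [50, 100]},
    },
    "name": "yolov5-sweep",
}
```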
diff --git a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/zoom_in_app.py b/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/zoom_in_app.py
deleted file mode 100644
index 19dfba3b99d249b96ba3ec7d57accc329ac22df0..0000000000000000000000000000000000000000
--- a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/zoom_in_app.py
+++ /dev/null
@@ -1,186 +0,0 @@
-import os
-
-import gradio as gr
-import numpy as np
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-from PIL import Image
-
-from video_diffusion.inpaint_zoom.utils.zoom_in_utils import dummy, image_grid, shrink_and_paste_on_blank, write_video
-
-os.environ["CUDA_VISIBLE_DEVICES"] = "0"
-
-
-stable_paint_model_list = ["stabilityai/stable-diffusion-2-inpainting", "runwayml/stable-diffusion-inpainting"]
-
-stable_paint_prompt_list = [
- "children running in the forest , sunny, bright, by studio ghibli painting, superior quality, masterpiece, traditional Japanese colors, by Grzegorz Rutkowski, concept art",
- "A beautiful landscape of a mountain range with a lake in the foreground",
-]
-
-stable_paint_negative_prompt_list = [
- "lurry, bad art, blurred, text, watermark",
-]
-
-
-class StableDiffusionZoomIn:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, model_id):
- if self.pipe is None:
- self.pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
- self.pipe.scheduler = DPMSolverMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe = self.pipe.to("cuda")
- self.pipe.safety_checker = dummy
- self.pipe.enable_attention_slicing()
- self.pipe.enable_xformers_memory_efficient_attention()
- self.g_cuda = torch.Generator(device="cuda")
-
- return self.pipe
-
- def generate_video(
- self,
- model_id,
- prompt,
- negative_prompt,
- guidance_scale,
- num_inference_steps,
- ):
- pipe = self.load_model(model_id)
-
- num_init_images = 2
- seed = 42
- height = 512
- width = height
-
- current_image = Image.new(mode="RGBA", size=(height, width))
- mask_image = np.array(current_image)[:, :, 3]
- mask_image = Image.fromarray(255 - mask_image).convert("RGB")
- current_image = current_image.convert("RGB")
-
- init_images = pipe(
- prompt=[prompt] * num_init_images,
- negative_prompt=[negative_prompt] * num_init_images,
- image=current_image,
- guidance_scale=guidance_scale,
- height=height,
- width=width,
- generator=self.g_cuda.manual_seed(seed),
- mask_image=mask_image,
- num_inference_steps=num_inference_steps,
- )[0]
-
- image_grid(init_images, rows=1, cols=num_init_images)
-
- init_image_selected = 1 # @param
- if num_init_images == 1:
- init_image_selected = 0
- else:
- init_image_selected = init_image_selected - 1
-
- num_outpainting_steps = 20 # @param
- mask_width = 128 # @param
- num_interpol_frames = 30 # @param
-
- current_image = init_images[init_image_selected]
- all_frames = []
- all_frames.append(current_image)
-
- for i in range(num_outpainting_steps):
- print("Generating image: " + str(i + 1) + " / " + str(num_outpainting_steps))
-
- prev_image_fix = current_image
-
- prev_image = shrink_and_paste_on_blank(current_image, mask_width)
-
- current_image = prev_image
-
- # create mask (black image with white mask_width width edges)
- mask_image = np.array(current_image)[:, :, 3]
- mask_image = Image.fromarray(255 - mask_image).convert("RGB")
-
- # inpainting step
- current_image = current_image.convert("RGB")
- images = pipe(
- prompt=prompt,
- negative_prompt=negative_prompt,
- image=current_image,
- guidance_scale=guidance_scale,
- height=height,
- width=width,
- # this can make the whole thing deterministic but the output less exciting
- # generator = g_cuda.manual_seed(seed),
- mask_image=mask_image,
- num_inference_steps=num_inference_steps,
- )[0]
- current_image = images[0]
- current_image.paste(prev_image, mask=prev_image)
-
-            # interpolation steps between 2 inpainted images (= sequential zoom and crop)
- for j in range(num_interpol_frames - 1):
- interpol_image = current_image
- interpol_width = round(
- (1 - (1 - 2 * mask_width / height) ** (1 - (j + 1) / num_interpol_frames)) * height / 2
- )
- interpol_image = interpol_image.crop(
- (interpol_width, interpol_width, width - interpol_width, height - interpol_width)
- )
-
- interpol_image = interpol_image.resize((height, width))
-
- # paste the higher resolution previous image in the middle to avoid drop in quality caused by zooming
- interpol_width2 = round((1 - (height - 2 * mask_width) / (height - 2 * interpol_width)) / 2 * height)
- prev_image_fix_crop = shrink_and_paste_on_blank(prev_image_fix, interpol_width2)
- interpol_image.paste(prev_image_fix_crop, mask=prev_image_fix_crop)
-
- all_frames.append(interpol_image)
-
- all_frames.append(current_image)
-
- video_file_name = "infinite_zoom_out"
- fps = 30
- save_path = video_file_name + ".mp4"
- write_video(save_path, all_frames, fps)
- return save_path
-
-    @staticmethod
-    def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- text2image_in_model_path = gr.Dropdown(
- choices=stable_paint_model_list, value=stable_paint_model_list[0], label="Text-Image Model Id"
- )
-
- text2image_in_prompt = gr.Textbox(lines=2, value=stable_paint_prompt_list[0], label="Prompt")
-
- text2image_in_negative_prompt = gr.Textbox(
- lines=1, value=stable_paint_negative_prompt_list[0], label="Negative Prompt"
- )
-
- with gr.Row():
- with gr.Column():
- text2image_in_guidance_scale = gr.Slider(
- minimum=0.1, maximum=15, step=0.1, value=7.5, label="Guidance Scale"
- )
-
- text2image_in_num_inference_step = gr.Slider(
- minimum=1, maximum=100, step=1, value=50, label="Num Inference Step"
- )
-
-                    text2image_in_predict = gr.Button(value="Generate")
-
- with gr.Column():
- output_image = gr.Video(label="Output")
-
- text2image_in_predict.click(
- fn=StableDiffusionZoomIn().generate_video,
- inputs=[
- text2image_in_model_path,
- text2image_in_prompt,
- text2image_in_negative_prompt,
- text2image_in_guidance_scale,
- text2image_in_num_inference_step,
- ],
- outputs=output_image,
- )
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/exceptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/exceptions.py
deleted file mode 100644
index a38447bb05bd5d503a32651d6046ff8667785c0c..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/exceptions.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# exceptions.py
-
-import re
-import sys
-import typing
-
-from .util import col, line, lineno, _collapse_string_to_ranges
-from .unicode import pyparsing_unicode as ppu
-
-
-class ExceptionWordUnicode(ppu.Latin1, ppu.LatinA, ppu.LatinB, ppu.Greek, ppu.Cyrillic):
- pass
-
-
-_extract_alphanums = _collapse_string_to_ranges(ExceptionWordUnicode.alphanums)
-_exception_word_extractor = re.compile("([" + _extract_alphanums + "]{1,16})|.")
-
-
-class ParseBaseException(Exception):
- """base exception class for all parsing runtime exceptions"""
-
- # Performance tuning: we construct a *lot* of these, so keep this
- # constructor as small and fast as possible
- def __init__(
- self,
- pstr: str,
- loc: int = 0,
- msg: typing.Optional[str] = None,
- elem=None,
- ):
- self.loc = loc
- if msg is None:
- self.msg = pstr
- self.pstr = ""
- else:
- self.msg = msg
- self.pstr = pstr
- self.parser_element = self.parserElement = elem
- self.args = (pstr, loc, msg)
-
- @staticmethod
- def explain_exception(exc, depth=16):
- """
- Method to take an exception and translate the Python internal traceback into a list
- of the pyparsing expressions that caused the exception to be raised.
-
- Parameters:
-
- - exc - exception raised during parsing (need not be a ParseException, in support
- of Python exceptions that might be raised in a parse action)
- - depth (default=16) - number of levels back in the stack trace to list expression
- and function names; if None, the full stack trace names will be listed; if 0, only
- the failing input line, marker, and exception string will be shown
-
- Returns a multi-line string listing the ParserElements and/or function names in the
- exception's stack trace.
- """
- import inspect
- from .core import ParserElement
-
- if depth is None:
- depth = sys.getrecursionlimit()
- ret = []
- if isinstance(exc, ParseBaseException):
- ret.append(exc.line)
- ret.append(" " * (exc.column - 1) + "^")
- ret.append("{}: {}".format(type(exc).__name__, exc))
-
- if depth > 0:
- callers = inspect.getinnerframes(exc.__traceback__, context=depth)
- seen = set()
- for i, ff in enumerate(callers[-depth:]):
- frm = ff[0]
-
- f_self = frm.f_locals.get("self", None)
- if isinstance(f_self, ParserElement):
- if frm.f_code.co_name not in ("parseImpl", "_parseNoCache"):
- continue
- if id(f_self) in seen:
- continue
- seen.add(id(f_self))
-
- self_type = type(f_self)
- ret.append(
- "{}.{} - {}".format(
- self_type.__module__, self_type.__name__, f_self
- )
- )
-
- elif f_self is not None:
- self_type = type(f_self)
- ret.append("{}.{}".format(self_type.__module__, self_type.__name__))
-
- else:
- code = frm.f_code
- if code.co_name in ("wrapper", ""):
- continue
-
- ret.append("{}".format(code.co_name))
-
- depth -= 1
- if not depth:
- break
-
- return "\n".join(ret)
-
- @classmethod
- def _from_exception(cls, pe):
- """
- internal factory method to simplify creating one type of ParseException
- from another - avoids having __init__ signature conflicts among subclasses
- """
- return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement)
-
- @property
- def line(self) -> str:
- """
- Return the line of text where the exception occurred.
- """
- return line(self.loc, self.pstr)
-
- @property
- def lineno(self) -> int:
- """
- Return the 1-based line number of text where the exception occurred.
- """
- return lineno(self.loc, self.pstr)
-
- @property
- def col(self) -> int:
- """
- Return the 1-based column on the line of text where the exception occurred.
- """
- return col(self.loc, self.pstr)
-
- @property
- def column(self) -> int:
- """
- Return the 1-based column on the line of text where the exception occurred.
- """
- return col(self.loc, self.pstr)
-
- def __str__(self) -> str:
- if self.pstr:
- if self.loc >= len(self.pstr):
- foundstr = ", found end of text"
- else:
- # pull out next word at error location
- found_match = _exception_word_extractor.match(self.pstr, self.loc)
- if found_match is not None:
- found = found_match.group(0)
- else:
- found = self.pstr[self.loc : self.loc + 1]
- foundstr = (", found %r" % found).replace(r"\\", "\\")
- else:
- foundstr = ""
- return "{}{} (at char {}), (line:{}, col:{})".format(
- self.msg, foundstr, self.loc, self.lineno, self.column
- )
-
- def __repr__(self):
- return str(self)
-
- def mark_input_line(self, marker_string: str = None, *, markerString=">!<") -> str:
- """
- Extracts the exception line from the input string, and marks
- the location of the exception with a special symbol.
- """
- markerString = marker_string if marker_string is not None else markerString
- line_str = self.line
- line_column = self.column - 1
- if markerString:
- line_str = "".join(
- (line_str[:line_column], markerString, line_str[line_column:])
- )
- return line_str.strip()
-
- def explain(self, depth=16) -> str:
- """
- Method to translate the Python internal traceback into a list
- of the pyparsing expressions that caused the exception to be raised.
-
- Parameters:
-
- - depth (default=16) - number of levels back in the stack trace to list expression
- and function names; if None, the full stack trace names will be listed; if 0, only
- the failing input line, marker, and exception string will be shown
-
- Returns a multi-line string listing the ParserElements and/or function names in the
- exception's stack trace.
-
- Example::
-
- expr = pp.Word(pp.nums) * 3
- try:
- expr.parse_string("123 456 A789")
- except pp.ParseException as pe:
- print(pe.explain(depth=0))
-
- prints::
-
- 123 456 A789
- ^
- ParseException: Expected W:(0-9), found 'A' (at char 8), (line:1, col:9)
-
- Note: the diagnostic output will include string representations of the expressions
- that failed to parse. These representations will be more helpful if you use `set_name` to
- give identifiable names to your expressions. Otherwise they will use the default string
- forms, which may be cryptic to read.
-
- Note: pyparsing's default truncation of exception tracebacks may also truncate the
- stack of expressions that are displayed in the ``explain`` output. To get the full listing
- of parser expressions, you may have to set ``ParserElement.verbose_stacktrace = True``
- """
- return self.explain_exception(self, depth)
-
- markInputline = mark_input_line
-
-
-class ParseException(ParseBaseException):
- """
- Exception thrown when a parse expression doesn't match the input string
-
- Example::
-
- try:
- Word(nums).set_name("integer").parse_string("ABC")
- except ParseException as pe:
- print(pe)
- print("column: {}".format(pe.column))
-
- prints::
-
- Expected integer (at char 0), (line:1, col:1)
- column: 1
-
- """
-
-
-class ParseFatalException(ParseBaseException):
- """
- User-throwable exception thrown when inconsistent parse content
- is found; stops all parsing immediately
- """
-
-
-class ParseSyntaxException(ParseFatalException):
- """
- Just like :class:`ParseFatalException`, but thrown internally
- when an :class:`ErrorStop` ('-' operator) indicates
- that parsing is to stop immediately because an unbacktrackable
- syntax error has been found.
- """
-
-
-class RecursiveGrammarException(Exception):
- """
- Exception thrown by :class:`ParserElement.validate` if the
- grammar could be left-recursive; parser may need to enable
- left recursion using :class:`ParserElement.enable_left_recursion`
- """
-
- def __init__(self, parseElementList):
- self.parseElementTrace = parseElementList
-
- def __str__(self) -> str:
- return "RecursiveGrammarException: {}".format(self.parseElementTrace)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_deprecation_warning.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_deprecation_warning.py
deleted file mode 100644
index 086b64dd3817c0c1a194ffc1959eeffdd2695bef..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_deprecation_warning.py
+++ /dev/null
@@ -1,7 +0,0 @@
-class SetuptoolsDeprecationWarning(Warning):
- """
- Base class for warning deprecations in ``setuptools``
-
- This class is not derived from ``DeprecationWarning``, and as such is
- visible by default.
- """
diff --git a/spaces/Audiogen/vector-search-demo/README.md b/spaces/Audiogen/vector-search-demo/README.md
deleted file mode 100644
index e1c652b323cc0ea40c29d78294694a1b786a7040..0000000000000000000000000000000000000000
--- a/spaces/Audiogen/vector-search-demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Vector Search Demo
-emoji: 💻
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: unlicense
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/+server.ts b/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/+server.ts
deleted file mode 100644
index b00a89d06f429f81859f80b761359833e32fbcd6..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/+server.ts
+++ /dev/null
@@ -1,236 +0,0 @@
-import { PUBLIC_SEP_TOKEN } from "$env/static/public";
-import { buildPrompt } from "$lib/buildPrompt.js";
-import { abortedGenerations } from "$lib/server/abortedGenerations.js";
-import { collections } from "$lib/server/database.js";
-import { modelEndpoint } from "$lib/server/modelEndpoint.js";
-import type { Message } from "$lib/types/Message.js";
-import { concatUint8Arrays } from "$lib/utils/concatUint8Arrays.js";
-import { streamToAsyncIterable } from "$lib/utils/streamToAsyncIterable";
-import { trimPrefix } from "$lib/utils/trimPrefix.js";
-import { trimSuffix } from "$lib/utils/trimSuffix.js";
-import type { TextGenerationStreamOutput } from "@huggingface/inference";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-import { z } from "zod";
-
-export async function POST({ request, fetch, locals, params }) {
- // todo: add validation on params.id
- const convId = new ObjectId(params.id);
- const date = new Date();
-
- const conv = await collections.conversations.findOne({
- _id: convId,
- sessionId: locals.sessionId,
- });
-
- if (!conv) {
- throw error(404, "Conversation not found");
- }
-
- const json = await request.json();
- const {
- inputs: newPrompt,
- options: { id: messageId, is_retry },
- } = z
- .object({
- inputs: z.string().trim().min(1),
- options: z.object({
- id: z.optional(z.string().uuid()),
- is_retry: z.optional(z.boolean()),
- }),
- })
- .parse(json);
-
- const messages = (() => {
- if (is_retry && messageId) {
- let retryMessageIdx = conv.messages.findIndex((message) => message.id === messageId);
- if (retryMessageIdx === -1) {
- retryMessageIdx = conv.messages.length;
- }
- return [
- ...conv.messages.slice(0, retryMessageIdx),
- { content: newPrompt, from: "user", id: messageId as Message["id"] },
- ];
- }
- return [
- ...conv.messages,
- { content: newPrompt, from: "user", id: (messageId as Message["id"]) || crypto.randomUUID() },
- ];
- })() satisfies Message[];
-
- // Todo: on-the-fly migration, remove later
- for (const message of messages) {
- if (!message.id) {
- message.id = crypto.randomUUID();
- }
- }
- const prompt = buildPrompt(messages);
-
- const randomEndpoint = modelEndpoint();
-
- const abortController = new AbortController();
-
- const resp = await fetch(randomEndpoint.endpoint, {
- headers: {
- "Content-Type": request.headers.get("Content-Type") ?? "application/json",
- Authorization: randomEndpoint.authorization,
- },
- method: "POST",
- body: JSON.stringify({
- ...json,
- inputs: prompt,
- }),
- signal: abortController.signal,
- });
-
- const [stream1, stream2] = resp.body!.tee();
-
- async function saveMessage() {
- let generated_text = await parseGeneratedText(stream2, convId, date, abortController);
-
- // We could also check if PUBLIC_ASSISTANT_MESSAGE_TOKEN is present and use it to slice the text
- if (generated_text.startsWith(prompt)) {
- generated_text = generated_text.slice(prompt.length);
- }
-
- generated_text = trimSuffix(trimPrefix(generated_text, "<|startoftext|>"), PUBLIC_SEP_TOKEN);
-
- messages.push({ from: "assistant", content: generated_text, id: crypto.randomUUID() });
-
- await collections.conversations.updateOne(
- {
- _id: convId,
- },
- {
- $set: {
- messages,
- updatedAt: new Date(),
- },
- }
- );
- }
-
- saveMessage().catch(console.error);
-
- // Todo: maybe we should wait for the message to be saved before ending the response - in case of errors
- return new Response(stream1, {
- headers: Object.fromEntries(resp.headers.entries()),
- status: resp.status,
- statusText: resp.statusText,
- });
-}
-
-export async function DELETE({ locals, params }) {
- const convId = new ObjectId(params.id);
-
- const conv = await collections.conversations.findOne({
- _id: convId,
- sessionId: locals.sessionId,
- });
-
- if (!conv) {
- throw error(404, "Conversation not found");
- }
-
- await collections.conversations.deleteOne({ _id: conv._id });
-
- return new Response();
-}
-
-async function parseGeneratedText(
- stream: ReadableStream,
- conversationId: ObjectId,
- promptedAt: Date,
- abortController: AbortController
-): Promise<string> {
- const inputs: Uint8Array[] = [];
- for await (const input of streamToAsyncIterable(stream)) {
- inputs.push(input);
-
- const date = abortedGenerations.get(conversationId.toString());
-
- if (date && date > promptedAt) {
- abortController.abort("Cancelled by user");
- const completeInput = concatUint8Arrays(inputs);
-
- const lines = new TextDecoder()
- .decode(completeInput)
- .split("\n")
- .filter((line) => line.startsWith("data:"));
-
- const tokens = lines.map((line) => {
- try {
- const json: TextGenerationStreamOutput = JSON.parse(line.slice("data:".length));
- return json.token.text;
- } catch {
- return "";
- }
- });
- return tokens.join("");
- }
- }
-
- // Merge inputs into a single Uint8Array
- const completeInput = concatUint8Arrays(inputs);
-
- // Get last line starting with "data:" and parse it as JSON to get the generated text
- const message = new TextDecoder().decode(completeInput);
-
- let lastIndex = message.lastIndexOf("\ndata:");
- if (lastIndex === -1) {
- lastIndex = message.indexOf("data");
- }
-
- if (lastIndex === -1) {
- console.error("Could not parse in last message");
- }
-
- let lastMessage = message.slice(lastIndex).trim().slice("data:".length);
- if (lastMessage.includes("\n")) {
- lastMessage = lastMessage.slice(0, lastMessage.indexOf("\n"));
- }
-
- const lastMessageJSON = JSON.parse(lastMessage);
-
- if (lastMessageJSON.error) {
- throw new Error(lastMessageJSON.error);
- }
-
- const res = lastMessageJSON.generated_text;
-
- if (typeof res !== "string") {
- throw new Error("Could not parse generated text");
- }
-
- return res;
-}
-
-export async function PATCH({ request, locals, params }) {
- const { title } = z
- .object({ title: z.string().trim().min(1).max(100) })
- .parse(await request.json());
-
- const convId = new ObjectId(params.id);
-
- const conv = await collections.conversations.findOne({
- _id: convId,
- sessionId: locals.sessionId,
- });
-
- if (!conv) {
- throw error(404, "Conversation not found");
- }
-
- await collections.conversations.updateOne(
- {
- _id: convId,
- },
- {
- $set: {
- title,
- },
- }
- );
-
- return new Response();
-}
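
`parseGeneratedText` above walks the server-sent-event payload, keeps the last `data:` line, and reads its `generated_text` field. Below is a Python sketch of that same logic, purely illustrative and not part of the SvelteKit route (the sample payload is made up).

```python
# Python sketch of the "last data: line wins" parsing done by parseGeneratedText.
import json

def last_generated_text(sse_payload: str) -> str:
    data_lines = [ln for ln in sse_payload.splitlines() if ln.startswith("data:")]
    if not data_lines:
        raise ValueError("could not find a data: line in the stream")
    last = json.loads(data_lines[-1][len("data:"):])
    if last.get("error"):
        raise RuntimeError(last["error"])
    text = last.get("generated_text")
    if not isinstance(text, str):
        raise TypeError("could not parse generated text")
    return text

payload = 'data:{"token":{"text":"Hi"}}\ndata:{"token":{"text":"!"},"generated_text":"Hi!"}'
print(last_generated_text(payload))  # -> Hi!
```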
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyparsing/helpers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyparsing/helpers.py
deleted file mode 100644
index 9588b3b780159a2a2d23c7f84a4404ec350e2b65..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyparsing/helpers.py
+++ /dev/null
@@ -1,1088 +0,0 @@
-# helpers.py
-import html.entities
-import re
-import typing
-
-from . import __diag__
-from .core import *
-from .util import _bslash, _flatten, _escape_regex_range_chars
-
-
-#
-# global helpers
-#
-def delimited_list(
- expr: Union[str, ParserElement],
- delim: Union[str, ParserElement] = ",",
- combine: bool = False,
- min: typing.Optional[int] = None,
- max: typing.Optional[int] = None,
- *,
- allow_trailing_delim: bool = False,
-) -> ParserElement:
- """Helper to define a delimited list of expressions - the delimiter
- defaults to ','. By default, the list elements and delimiters can
- have intervening whitespace, and comments, but this can be
- overridden by passing ``combine=True`` in the constructor. If
- ``combine`` is set to ``True``, the matching tokens are
- returned as a single token string, with the delimiters included;
- otherwise, the matching tokens are returned as a list of tokens,
- with the delimiters suppressed.
-
- If ``allow_trailing_delim`` is set to True, then the list may end with
- a delimiter.
-
- Example::
-
- delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc']
- delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE']
- """
- if isinstance(expr, str_type):
- expr = ParserElement._literalStringClass(expr)
-
- dlName = "{expr} [{delim} {expr}]...{end}".format(
- expr=str(expr.copy().streamline()),
- delim=str(delim),
- end=" [{}]".format(str(delim)) if allow_trailing_delim else "",
- )
-
- if not combine:
- delim = Suppress(delim)
-
- if min is not None:
- if min < 1:
- raise ValueError("min must be greater than 0")
- min -= 1
- if max is not None:
- if min is not None and max <= min:
- raise ValueError("max must be greater than, or equal to min")
- max -= 1
- delimited_list_expr = expr + (delim + expr)[min, max]
-
- if allow_trailing_delim:
- delimited_list_expr += Opt(delim)
-
- if combine:
- return Combine(delimited_list_expr).set_name(dlName)
- else:
- return delimited_list_expr.set_name(dlName)
-
-
-def counted_array(
- expr: ParserElement,
- int_expr: typing.Optional[ParserElement] = None,
- *,
- intExpr: typing.Optional[ParserElement] = None,
-) -> ParserElement:
- """Helper to define a counted list of expressions.
-
- This helper defines a pattern of the form::
-
- integer expr expr expr...
-
- where the leading integer tells how many expr expressions follow.
- The matched tokens returns the array of expr tokens as a list - the
- leading count token is suppressed.
-
- If ``int_expr`` is specified, it should be a pyparsing expression
- that produces an integer value.
-
- Example::
-
- counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd']
-
- # in this parser, the leading integer value is given in binary,
- # '10' indicating that 2 values are in the array
- binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2))
- counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd']
-
- # if other fields must be parsed after the count but before the
- # list items, give the fields results names and they will
- # be preserved in the returned ParseResults:
- count_with_metadata = integer + Word(alphas)("type")
- typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items")
- result = typed_array.parse_string("3 bool True True False")
- print(result.dump())
-
- # prints
- # ['True', 'True', 'False']
- # - items: ['True', 'True', 'False']
- # - type: 'bool'
- """
- intExpr = intExpr or int_expr
- array_expr = Forward()
-
- def count_field_parse_action(s, l, t):
- nonlocal array_expr
- n = t[0]
- array_expr <<= (expr * n) if n else Empty()
- # clear list contents, but keep any named results
- del t[:]
-
- if intExpr is None:
- intExpr = Word(nums).set_parse_action(lambda t: int(t[0]))
- else:
- intExpr = intExpr.copy()
- intExpr.set_name("arrayLen")
- intExpr.add_parse_action(count_field_parse_action, call_during_try=True)
- return (intExpr + array_expr).set_name("(len) " + str(expr) + "...")
-
-
-def match_previous_literal(expr: ParserElement) -> ParserElement:
- """Helper to define an expression that is indirectly defined from
- the tokens matched in a previous expression, that is, it looks for
- a 'repeat' of a previous expression. For example::
-
- first = Word(nums)
- second = match_previous_literal(first)
- match_expr = first + ":" + second
-
- will match ``"1:1"``, but not ``"1:2"``. Because this
- matches a previous literal, will also match the leading
- ``"1:1"`` in ``"1:10"``. If this is not desired, use
- :class:`match_previous_expr`. Do *not* use with packrat parsing
- enabled.
- """
- rep = Forward()
-
- def copy_token_to_repeater(s, l, t):
- if t:
- if len(t) == 1:
- rep << t[0]
- else:
- # flatten t tokens
- tflat = _flatten(t.as_list())
- rep << And(Literal(tt) for tt in tflat)
- else:
- rep << Empty()
-
- expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
- rep.set_name("(prev) " + str(expr))
- return rep
-
-
-def match_previous_expr(expr: ParserElement) -> ParserElement:
- """Helper to define an expression that is indirectly defined from
- the tokens matched in a previous expression, that is, it looks for
- a 'repeat' of a previous expression. For example::
-
- first = Word(nums)
- second = match_previous_expr(first)
- match_expr = first + ":" + second
-
- will match ``"1:1"``, but not ``"1:2"``. Because this
- matches by expressions, will *not* match the leading ``"1:1"``
- in ``"1:10"``; the expressions are evaluated first, and then
- compared, so ``"1"`` is compared with ``"10"``. Do *not* use
- with packrat parsing enabled.
- """
- rep = Forward()
- e2 = expr.copy()
- rep <<= e2
-
- def copy_token_to_repeater(s, l, t):
- matchTokens = _flatten(t.as_list())
-
- def must_match_these_tokens(s, l, t):
- theseTokens = _flatten(t.as_list())
- if theseTokens != matchTokens:
- raise ParseException(
- s, l, "Expected {}, found{}".format(matchTokens, theseTokens)
- )
-
- rep.set_parse_action(must_match_these_tokens, callDuringTry=True)
-
- expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
- rep.set_name("(prev) " + str(expr))
- return rep
-
-
-def one_of(
- strs: Union[typing.Iterable[str], str],
- caseless: bool = False,
- use_regex: bool = True,
- as_keyword: bool = False,
- *,
- useRegex: bool = True,
- asKeyword: bool = False,
-) -> ParserElement:
- """Helper to quickly define a set of alternative :class:`Literal` s,
- and makes sure to do longest-first testing when there is a conflict,
- regardless of the input order, but returns
- a :class:`MatchFirst` for best performance.
-
- Parameters:
-
- - ``strs`` - a string of space-delimited literals, or a collection of
- string literals
- - ``caseless`` - treat all literals as caseless - (default= ``False``)
- - ``use_regex`` - as an optimization, will
- generate a :class:`Regex` object; otherwise, will generate
- a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if
- creating a :class:`Regex` raises an exception) - (default= ``True``)
- - ``as_keyword`` - enforce :class:`Keyword`-style matching on the
- generated expressions - (default= ``False``)
- - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility,
- but will be removed in a future release
-
- Example::
-
- comp_oper = one_of("< = > <= >= !=")
- var = Word(alphas)
- number = Word(nums)
- term = var | number
- comparison_expr = term + comp_oper + term
- print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12"))
-
- prints::
-
- [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']]
- """
- asKeyword = asKeyword or as_keyword
- useRegex = useRegex and use_regex
-
- if (
- isinstance(caseless, str_type)
- and __diag__.warn_on_multiple_string_args_to_oneof
- ):
- warnings.warn(
- "More than one string argument passed to one_of, pass"
- " choices as a list or space-delimited string",
- stacklevel=2,
- )
-
- if caseless:
- isequal = lambda a, b: a.upper() == b.upper()
- masks = lambda a, b: b.upper().startswith(a.upper())
- parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral
- else:
- isequal = lambda a, b: a == b
- masks = lambda a, b: b.startswith(a)
- parseElementClass = Keyword if asKeyword else Literal
-
- symbols: List[str] = []
- if isinstance(strs, str_type):
- symbols = strs.split()
- elif isinstance(strs, Iterable):
- symbols = list(strs)
- else:
- raise TypeError("Invalid argument to one_of, expected string or iterable")
- if not symbols:
- return NoMatch()
-
- # reorder given symbols to take care to avoid masking longer choices with shorter ones
- # (but only if the given symbols are not just single characters)
- if any(len(sym) > 1 for sym in symbols):
- i = 0
- while i < len(symbols) - 1:
- cur = symbols[i]
- for j, other in enumerate(symbols[i + 1 :]):
- if isequal(other, cur):
- del symbols[i + j + 1]
- break
- elif masks(cur, other):
- del symbols[i + j + 1]
- symbols.insert(i, other)
- break
- else:
- i += 1
-
- if useRegex:
- re_flags: int = re.IGNORECASE if caseless else 0
-
- try:
- if all(len(sym) == 1 for sym in symbols):
- # symbols are just single characters, create range regex pattern
- patt = "[{}]".format(
- "".join(_escape_regex_range_chars(sym) for sym in symbols)
- )
- else:
- patt = "|".join(re.escape(sym) for sym in symbols)
-
- # wrap with \b word break markers if defining as keywords
- if asKeyword:
- patt = r"\b(?:{})\b".format(patt)
-
- ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols))
-
- if caseless:
- # add parse action to return symbols as specified, not in random
- # casing as found in input string
- symbol_map = {sym.lower(): sym for sym in symbols}
- ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()])
-
- return ret
-
- except re.error:
- warnings.warn(
- "Exception creating Regex for one_of, building MatchFirst", stacklevel=2
- )
-
- # last resort, just use MatchFirst
- return MatchFirst(parseElementClass(sym) for sym in symbols).set_name(
- " | ".join(symbols)
- )
-
-
-def dict_of(key: ParserElement, value: ParserElement) -> ParserElement:
- """Helper to easily and clearly define a dictionary by specifying
- the respective patterns for the key and value. Takes care of
- defining the :class:`Dict`, :class:`ZeroOrMore`, and
- :class:`Group` tokens in the proper order. The key pattern
- can include delimiting markers or punctuation, as long as they are
- suppressed, thereby leaving the significant key text. The value
- pattern can include named results, so that the :class:`Dict` results
- can include named token fields.
-
- Example::
-
- text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
- attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- print(attr_expr[1, ...].parse_string(text).dump())
-
- attr_label = label
- attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)
-
- # similar to Dict, but simpler call format
- result = dict_of(attr_label, attr_value).parse_string(text)
- print(result.dump())
- print(result['shape'])
- print(result.shape) # object attribute access works too
- print(result.as_dict())
-
- prints::
-
- [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
- - color: 'light blue'
- - posn: 'upper left'
- - shape: 'SQUARE'
- - texture: 'burlap'
- SQUARE
- SQUARE
- {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'}
- """
- return Dict(OneOrMore(Group(key + value)))
-
-
-def original_text_for(
- expr: ParserElement, as_string: bool = True, *, asString: bool = True
-) -> ParserElement:
- """Helper to return the original, untokenized text for a given
- expression. Useful to restore the parsed fields of an HTML start
- tag into the raw tag text itself, or to revert separate tokens with
- intervening whitespace back to the original matching input text. By
-    default, returns a string containing the original parsed text.
-
- If the optional ``as_string`` argument is passed as
- ``False``, then the return value is
- a :class:`ParseResults` containing any results names that
- were originally matched, and a single token containing the original
- matched text from the input string. So if the expression passed to
- :class:`original_text_for` contains expressions with defined
- results names, you must set ``as_string`` to ``False`` if you
- want to preserve those results name values.
-
- The ``asString`` pre-PEP8 argument is retained for compatibility,
- but will be removed in a future release.
-
- Example::
-
- src = "this is test bold text normal text "
- for tag in ("b", "i"):
- opener, closer = make_html_tags(tag)
- patt = original_text_for(opener + SkipTo(closer) + closer)
- print(patt.search_string(src)[0])
-
- prints::
-
-        ['<b> bold <i>text</i> </b>']
-        ['<i>text</i>']
- """
- asString = asString and as_string
-
- locMarker = Empty().set_parse_action(lambda s, loc, t: loc)
- endlocMarker = locMarker.copy()
- endlocMarker.callPreparse = False
- matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end")
- if asString:
- extractText = lambda s, l, t: s[t._original_start : t._original_end]
- else:
-
- def extractText(s, l, t):
- t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]]
-
- matchExpr.set_parse_action(extractText)
- matchExpr.ignoreExprs = expr.ignoreExprs
- matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection)
- return matchExpr
-
-
-def ungroup(expr: ParserElement) -> ParserElement:
- """Helper to undo pyparsing's default grouping of And expressions,
- even if all but one are non-empty.
- """
- return TokenConverter(expr).add_parse_action(lambda t: t[0])
-
-
-def locatedExpr(expr: ParserElement) -> ParserElement:
- """
- (DEPRECATED - future code should use the Located class)
- Helper to decorate a returned token with its starting and ending
- locations in the input string.
-
- This helper adds the following results names:
-
- - ``locn_start`` - location where matched expression begins
- - ``locn_end`` - location where matched expression ends
- - ``value`` - the actual parsed results
-
-    Be careful if the input text contains ``<TAB>`` characters, you
- may want to call :class:`ParserElement.parseWithTabs`
-
- Example::
-
- wd = Word(alphas)
- for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"):
- print(match)
-
- prints::
-
- [[0, 'ljsdf', 5]]
- [[8, 'lksdjjf', 15]]
- [[18, 'lkkjj', 23]]
- """
- locator = Empty().set_parse_action(lambda ss, ll, tt: ll)
- return Group(
- locator("locn_start")
- + expr("value")
- + locator.copy().leaveWhitespace()("locn_end")
- )
-
-
-def nested_expr(
- opener: Union[str, ParserElement] = "(",
- closer: Union[str, ParserElement] = ")",
- content: typing.Optional[ParserElement] = None,
- ignore_expr: ParserElement = quoted_string(),
- *,
- ignoreExpr: ParserElement = quoted_string(),
-) -> ParserElement:
- """Helper method for defining nested lists enclosed in opening and
- closing delimiters (``"("`` and ``")"`` are the default).
-
- Parameters:
- - ``opener`` - opening character for a nested list
- (default= ``"("``); can also be a pyparsing expression
- - ``closer`` - closing character for a nested list
- (default= ``")"``); can also be a pyparsing expression
- - ``content`` - expression for items within the nested lists
- (default= ``None``)
- - ``ignore_expr`` - expression for ignoring opening and closing delimiters
- (default= :class:`quoted_string`)
- - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility
- but will be removed in a future release
-
- If an expression is not provided for the content argument, the
- nested expression will capture all whitespace-delimited content
- between delimiters as a list of separate values.
-
- Use the ``ignore_expr`` argument to define expressions that may
- contain opening or closing characters that should not be treated as
- opening or closing characters for nesting, such as quoted_string or
- a comment expression. Specify multiple expressions using an
- :class:`Or` or :class:`MatchFirst`. The default is
- :class:`quoted_string`, but if no expressions are to be ignored, then
- pass ``None`` for this argument.
-
- Example::
-
- data_type = one_of("void int short long char float double")
- decl_data_type = Combine(data_type + Opt(Word('*')))
- ident = Word(alphas+'_', alphanums+'_')
- number = pyparsing_common.number
- arg = Group(decl_data_type + ident)
- LPAR, RPAR = map(Suppress, "()")
-
- code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment))
-
- c_function = (decl_data_type("type")
- + ident("name")
- + LPAR + Opt(delimited_list(arg), [])("args") + RPAR
- + code_body("body"))
- c_function.ignore(c_style_comment)
-
- source_code = '''
- int is_odd(int x) {
- return (x%2);
- }
-
- int dec_to_hex(char hchar) {
- if (hchar >= '0' && hchar <= '9') {
- return (ord(hchar)-ord('0'));
- } else {
- return (10+ord(hchar)-ord('A'));
- }
- }
- '''
- for func in c_function.search_string(source_code):
- print("%(name)s (%(type)s) args: %(args)s" % func)
-
-
- prints::
-
- is_odd (int) args: [['int', 'x']]
- dec_to_hex (int) args: [['char', 'hchar']]
- """
- if ignoreExpr != ignore_expr:
- ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr
- if opener == closer:
- raise ValueError("opening and closing strings cannot be the same")
- if content is None:
- if isinstance(opener, str_type) and isinstance(closer, str_type):
- if len(opener) == 1 and len(closer) == 1:
- if ignoreExpr is not None:
- content = Combine(
- OneOrMore(
- ~ignoreExpr
- + CharsNotIn(
- opener + closer + ParserElement.DEFAULT_WHITE_CHARS,
- exact=1,
- )
- )
- ).set_parse_action(lambda t: t[0].strip())
- else:
- content = empty.copy() + CharsNotIn(
- opener + closer + ParserElement.DEFAULT_WHITE_CHARS
- ).set_parse_action(lambda t: t[0].strip())
- else:
- if ignoreExpr is not None:
- content = Combine(
- OneOrMore(
- ~ignoreExpr
- + ~Literal(opener)
- + ~Literal(closer)
- + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
- )
- ).set_parse_action(lambda t: t[0].strip())
- else:
- content = Combine(
- OneOrMore(
- ~Literal(opener)
- + ~Literal(closer)
- + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
- )
- ).set_parse_action(lambda t: t[0].strip())
- else:
- raise ValueError(
- "opening and closing arguments must be strings if no content expression is given"
- )
- ret = Forward()
- if ignoreExpr is not None:
- ret <<= Group(
- Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer)
- )
- else:
- ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer))
- ret.set_name("nested %s%s expression" % (opener, closer))
- return ret
-
-
-def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")):
- """Internal helper to construct opening and closing tag expressions, given a tag name"""
- if isinstance(tagStr, str_type):
- resname = tagStr
- tagStr = Keyword(tagStr, caseless=not xml)
- else:
- resname = tagStr.name
-
- tagAttrName = Word(alphas, alphanums + "_-:")
- if xml:
- tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes)
- openTag = (
- suppress_LT
- + tagStr("tag")
- + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue)))
- + Opt("/", default=[False])("empty").set_parse_action(
- lambda s, l, t: t[0] == "/"
- )
- + suppress_GT
- )
- else:
- tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word(
- printables, exclude_chars=">"
- )
- openTag = (
- suppress_LT
- + tagStr("tag")
- + Dict(
- ZeroOrMore(
- Group(
- tagAttrName.set_parse_action(lambda t: t[0].lower())
- + Opt(Suppress("=") + tagAttrValue)
- )
- )
- )
- + Opt("/", default=[False])("empty").set_parse_action(
- lambda s, l, t: t[0] == "/"
- )
- + suppress_GT
- )
- closeTag = Combine(Literal("") + tagStr + ">", adjacent=False)
-
- openTag.set_name("<%s>" % resname)
- # add start results name in parse action now that ungrouped names are not reported at two levels
- openTag.add_parse_action(
- lambda t: t.__setitem__(
- "start" + "".join(resname.replace(":", " ").title().split()), t.copy()
- )
- )
- closeTag = closeTag(
- "end" + "".join(resname.replace(":", " ").title().split())
- ).set_name("%s>" % resname)
- openTag.tag = resname
- closeTag.tag = resname
- openTag.tag_body = SkipTo(closeTag())
- return openTag, closeTag
-
-
-def make_html_tags(
- tag_str: Union[str, ParserElement]
-) -> Tuple[ParserElement, ParserElement]:
- """Helper to construct opening and closing tag expressions for HTML,
- given a tag name. Matches tags in either upper or lower case,
- attributes with namespaces and with quoted or unquoted values.
-
- Example::
-
-        text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
- )
-}
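
The `make_html_tags` docstring example above was truncated in this dump. A short standalone sketch of the same helper, using the released `pyparsing` package, recovers the idea (the sample markup and URL are illustrative):

```python
# Illustrative use of make_html_tags / SkipTo from the standalone pyparsing package.
import pyparsing as pp

text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
a_start, a_end = pp.make_html_tags("A")
link = a_start + pp.SkipTo(a_end)("link_text") + a_end

for match in link.search_string(text):
    # tag attributes such as href are exposed as named results on the match
    print(match.link_text, "->", match.href)
# prints: pyparsing -> https://github.com/pyparsing/pyparsing/wiki
```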
diff --git a/spaces/Hoodady/3DFuse/ldm/models/autoencoder.py b/spaces/Hoodady/3DFuse/ldm/models/autoencoder.py
deleted file mode 100644
index d122549995ce2cd64092c81a58419ed4a15a02fd..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/ldm/models/autoencoder.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import torch
-import pytorch_lightning as pl
-import torch.nn.functional as F
-from contextlib import contextmanager
-
-from ldm.modules.diffusionmodules.model import Encoder, Decoder
-from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
-
-from ldm.util import instantiate_from_config
-from ldm.modules.ema import LitEma
-
-
-class AutoencoderKL(pl.LightningModule):
- def __init__(self,
- ddconfig,
- lossconfig,
- embed_dim,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None,
- monitor=None,
- ema_decay=None,
- learn_logvar=False
- ):
- super().__init__()
- self.learn_logvar = learn_logvar
- self.image_key = image_key
- self.encoder = Encoder(**ddconfig)
- self.decoder = Decoder(**ddconfig)
- self.loss = instantiate_from_config(lossconfig)
- assert ddconfig["double_z"]
- self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1)
- self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
- self.embed_dim = embed_dim
- if colorize_nlabels is not None:
- assert type(colorize_nlabels)==int
- self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
- if monitor is not None:
- self.monitor = monitor
-
- self.use_ema = ema_decay is not None
- if self.use_ema:
- self.ema_decay = ema_decay
- assert 0. < ema_decay < 1.
- self.model_ema = LitEma(self, decay=ema_decay)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- self.load_state_dict(sd, strict=False)
- print(f"Restored from {path}")
-
- @contextmanager
- def ema_scope(self, context=None):
- if self.use_ema:
- self.model_ema.store(self.parameters())
- self.model_ema.copy_to(self)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self)
-
- def encode(self, x):
- h = self.encoder(x)
- moments = self.quant_conv(h)
- posterior = DiagonalGaussianDistribution(moments)
- return posterior
-
- def decode(self, z):
- z = self.post_quant_conv(z)
- dec = self.decoder(z)
- return dec
-
- def forward(self, input, sample_posterior=True):
- posterior = self.encode(input)
- if sample_posterior:
- z = posterior.sample()
- else:
- z = posterior.mode()
- dec = self.decode(z)
- return dec, posterior
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
- return x
-
- def training_step(self, batch, batch_idx, optimizer_idx):
- inputs = self.get_input(batch, self.image_key)
- reconstructions, posterior = self(inputs)
-
- if optimizer_idx == 0:
- # train encoder+decoder+logvar
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
- self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)
- return aeloss
-
- if optimizer_idx == 1:
- # train the discriminator
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
-
- self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)
- return discloss
-
- def validation_step(self, batch, batch_idx):
- log_dict = self._validation_step(batch, batch_idx)
- with self.ema_scope():
- log_dict_ema = self._validation_step(batch, batch_idx, postfix="_ema")
- return log_dict
-
- def _validation_step(self, batch, batch_idx, postfix=""):
- inputs = self.get_input(batch, self.image_key)
- reconstructions, posterior = self(inputs)
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,
- last_layer=self.get_last_layer(), split="val"+postfix)
-
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,
- last_layer=self.get_last_layer(), split="val"+postfix)
-
- self.log(f"val{postfix}/rec_loss", log_dict_ae[f"val{postfix}/rec_loss"])
- self.log_dict(log_dict_ae)
- self.log_dict(log_dict_disc)
- return self.log_dict
-
- def configure_optimizers(self):
- lr = self.learning_rate
- ae_params_list = list(self.encoder.parameters()) + list(self.decoder.parameters()) + list(
- self.quant_conv.parameters()) + list(self.post_quant_conv.parameters())
- if self.learn_logvar:
- print(f"{self.__class__.__name__}: Learning logvar")
- ae_params_list.append(self.loss.logvar)
- opt_ae = torch.optim.Adam(ae_params_list,
- lr=lr, betas=(0.5, 0.9))
- opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
- lr=lr, betas=(0.5, 0.9))
- return [opt_ae, opt_disc], []
-
- def get_last_layer(self):
- return self.decoder.conv_out.weight
-
- @torch.no_grad()
- def log_images(self, batch, only_inputs=False, log_ema=False, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- if not only_inputs:
- xrec, posterior = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec.shape[1] > 3
- x = self.to_rgb(x)
- xrec = self.to_rgb(xrec)
- log["samples"] = self.decode(torch.randn_like(posterior.sample()))
- log["reconstructions"] = xrec
- if log_ema or self.use_ema:
- with self.ema_scope():
- xrec_ema, posterior_ema = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec_ema.shape[1] > 3
- xrec_ema = self.to_rgb(xrec_ema)
- log["samples_ema"] = self.decode(torch.randn_like(posterior_ema.sample()))
- log["reconstructions_ema"] = xrec_ema
- log["inputs"] = x
- return log
-
- def to_rgb(self, x):
- assert self.image_key == "segmentation"
- if not hasattr(self, "colorize"):
- self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
- x = F.conv2d(x, weight=self.colorize)
- x = 2.*(x-x.min())/(x.max()-x.min()) - 1.
- return x
-
-
-class IdentityFirstStage(torch.nn.Module):
- def __init__(self, *args, vq_interface=False, **kwargs):
- self.vq_interface = vq_interface
- super().__init__()
-
- def encode(self, x, *args, **kwargs):
- return x
-
- def decode(self, x, *args, **kwargs):
- return x
-
- def quantize(self, x, *args, **kwargs):
- if self.vq_interface:
- return x, None, [None, None, None]
- return x
-
- def forward(self, x, *args, **kwargs):
- return x
-
diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/losses/vqperceptual.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/losses/vqperceptual.py
deleted file mode 100644
index fd3874011472c423f059e573029564e979dd225d..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/losses/vqperceptual.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from taming.modules.losses.lpips import LPIPS
-from taming.modules.discriminator.model import NLayerDiscriminator, weights_init
-
-
-class DummyLoss(nn.Module):
- def __init__(self):
- super().__init__()
-
-
-def adopt_weight(weight, global_step, threshold=0, value=0.0):
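-    # Returns `value` (default 0) until `global_step` reaches `threshold`, then the given
-    # weight; used to keep the discriminator term switched off during warm-up.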
- if global_step < threshold:
- weight = value
- return weight
-
-
-def hinge_d_loss(logits_real, logits_fake):
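-    # Hinge discriminator loss: 0.5 * (mean(relu(1 - D(real))) + mean(relu(1 + D(fake)))).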
- loss_real = torch.mean(F.relu(1.0 - logits_real))
- loss_fake = torch.mean(F.relu(1.0 + logits_fake))
- d_loss = 0.5 * (loss_real + loss_fake)
- return d_loss
-
-
-def vanilla_d_loss(logits_real, logits_fake):
- d_loss = 0.5 * (
- torch.mean(torch.nn.functional.softplus(-logits_real))
- + torch.mean(torch.nn.functional.softplus(logits_fake))
- )
- return d_loss
-
-
-class VQLPIPSWithDiscriminator(nn.Module):
- def __init__(
- self,
- disc_start,
- codebook_weight=1.0,
- pixelloss_weight=1.0,
- disc_num_layers=3,
- disc_in_channels=3,
- disc_factor=1.0,
- disc_weight=1.0,
- perceptual_weight=1.0,
- use_actnorm=False,
- disc_conditional=False,
- disc_ndf=64,
- disc_loss="hinge",
- ):
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.codebook_weight = codebook_weight
- self.pixel_weight = pixelloss_weight
- self.perceptual_loss = LPIPS().eval()
- self.perceptual_weight = perceptual_weight
-
- self.discriminator = NLayerDiscriminator(
- input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm,
- ndf=disc_ndf,
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- if disc_loss == "hinge":
- self.disc_loss = hinge_d_loss
- elif disc_loss == "vanilla":
- self.disc_loss = vanilla_d_loss
- else:
- raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
- print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.")
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
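-        # Balance reconstruction and adversarial objectives: the GAN term is rescaled by
-        # ||grad(nll)|| / (||grad(g)|| + 1e-4), both taken w.r.t. the decoder's last layer,
-        # clamped to [0, 1e4] and multiplied by the configured discriminator weight.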
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(
- nll_loss, self.last_layer[0], retain_graph=True
- )[0]
- g_grads = torch.autograd.grad(
- g_loss, self.last_layer[0], retain_graph=True
- )[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(
- self,
- codebook_loss,
- inputs,
- reconstructions,
- optimizer_idx,
- global_step,
- last_layer=None,
- cond=None,
- split="train",
- ):
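-        # optimizer_idx 0: generator pass — L1 + LPIPS reconstruction loss plus an
-        # adaptively weighted GAN term and the codebook loss.
-        # optimizer_idx 1: discriminator pass — hinge/vanilla loss on real vs. detached
-        # reconstructed images.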
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(
- inputs.contiguous(), reconstructions.contiguous()
- )
- rec_loss = rec_loss + self.perceptual_weight * p_loss
- else:
- p_loss = torch.tensor([0.0])
-
- nll_loss = rec_loss
- # nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- nll_loss = torch.mean(nll_loss)
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(
- torch.cat((reconstructions.contiguous(), cond), dim=1)
- )
- g_loss = -torch.mean(logits_fake)
-
- try:
- d_weight = self.calculate_adaptive_weight(
- nll_loss, g_loss, last_layer=last_layer
- )
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(
- self.disc_factor, global_step, threshold=self.discriminator_iter_start
- )
- loss = (
- nll_loss
- + d_weight * disc_factor * g_loss
- + self.codebook_weight * codebook_loss.mean()
- )
-
- log = {
- "{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/quant_loss".format(split): codebook_loss.detach().mean(),
- "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/p_loss".format(split): p_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(
- torch.cat((inputs.contiguous().detach(), cond), dim=1)
- )
- logits_fake = self.discriminator(
- torch.cat((reconstructions.contiguous().detach(), cond), dim=1)
- )
-
- disc_factor = adopt_weight(
- self.disc_factor, global_step, threshold=self.discriminator_iter_start
- )
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {
- "{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean(),
- }
- return d_loss, log
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/labels/labels.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/labels/labels.py
deleted file mode 100644
index 2f78c1ae0f2283645231d8e16425fdc3b31703d2..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/labels/labels.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import evaluate
-import logging
-import os
-import pandas as pd
-import plotly.express as px
-import utils
-import utils.dataset_utils as ds_utils
-from collections import Counter
-from os.path import exists, isdir
-from os.path import join as pjoin
-
-LABEL_FIELD = "labels"
-LABEL_NAMES = "label_names"
-LABEL_LIST = "label_list"
-LABEL_MEASUREMENT = "label_measurement"
-# Specific to the evaluate library
-EVAL_LABEL_MEASURE = "label_distribution"
-EVAL_LABEL_ID = "labels"
-EVAL_LABEL_FRAC = "fractions"
-# TODO: This should ideally be in what's returned from the evaluate library
-EVAL_LABEL_SUM = "sums"
-
-logs = utils.prepare_logging(__file__)
-
-
-def map_labels(label_field, ds_name_to_dict, ds_name, config_name):
- try:
- label_field, label_names = (
- ds_name_to_dict[ds_name][config_name]["features"][label_field][0]
- if len(
- ds_name_to_dict[ds_name][config_name]["features"][label_field]) > 0
- else ((), [])
- )
- except KeyError as e:
- logs.exception(e)
- logs.warning("Not returning a label-name mapping")
- return []
- return label_names
-
-
-def make_label_results_dict(label_measurement, label_names):
- label_dict = {LABEL_MEASUREMENT: label_measurement,
- LABEL_NAMES: label_names}
- return label_dict
-
-
-def make_label_fig(label_results, chart_type="pie"):
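-    # Builds a plotly figure of the label distribution (pie by default, bar on request);
-    # returns False when required keys are missing or the number of label names does not
-    # match the number of label sums.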
- try:
- label_names = label_results[LABEL_NAMES]
- label_measurement = label_results[LABEL_MEASUREMENT]
- label_sums = label_measurement[EVAL_LABEL_SUM]
- if chart_type == "bar":
-            fig_labels = px.bar(
-                x=label_measurement[EVAL_LABEL_MEASURE][EVAL_LABEL_ID],
-                y=label_measurement[EVAL_LABEL_MEASURE][EVAL_LABEL_FRAC])
- else:
- if chart_type != "pie":
- logs.info("Oops! Don't have that chart-type implemented.")
- logs.info("Making the default pie chart")
- # IMDB - unsupervised has a labels column where all values are -1,
- # which breaks the assumption that
- # the number of label_names == the number of label_sums.
- # This handles that case, assuming it will happen in other datasets.
- if len(label_names) != len(label_sums):
- logs.warning("Can't make a figure with the given label names: "
- "We don't have the right amount of label types "
- "to apply them to!")
- return False
- fig_labels = px.pie(names=label_names, values=label_sums)
- except KeyError:
- logs.info("Input label data missing required key(s).")
- logs.info("We require %s, %s" % (LABEL_NAMES, LABEL_MEASUREMENT))
- logs.info("We found: %s" % ",".join(label_results.keys()))
- return False
- return fig_labels
-
-
-def extract_label_names(label_field, ds_name, config_name):
- ds_name_to_dict = ds_utils.get_dataset_info_dicts(ds_name)
- label_names = map_labels(label_field, ds_name_to_dict, ds_name, config_name)
- return label_names
-
-
-class DMTHelper:
- """Helper class for the Data Measurements Tool.
- This allows us to keep all variables and functions related to labels
- in one file.
- """
-
- def __init__(self, dstats, load_only, save):
- logs.info("Initializing labels.")
- # -- Data Measurements Tool variables
- self.label_results = dstats.label_results
- self.fig_labels = dstats.fig_labels
- self.use_cache = dstats.use_cache
- self.cache_dir = dstats.dataset_cache_dir
- self.load_only = load_only
- self.save = save
- # -- Hugging Face Dataset variables
- self.label_field = dstats.label_field
- # Input HuggingFace dataset
- self.dset = dstats.dset
- self.dset_name = dstats.dset_name
- self.dset_config = dstats.dset_config
- self.label_names = dstats.label_names
- # -- Filenames
- self.label_dir = "labels"
- label_json = "labels.json"
- label_fig_json = "labels_fig.json"
- label_fig_html = "labels_fig.html"
- self.labels_json_fid = pjoin(self.cache_dir, self.label_dir,
- label_json)
- self.labels_fig_json_fid = pjoin(self.cache_dir, self.label_dir,
- label_fig_json)
- self.labels_fig_html_fid = pjoin(self.cache_dir, self.label_dir,
- label_fig_html)
-
- def run_DMT_processing(self):
- """
- Loads or prepares the Labels measurements and figure as specified by
- the DMT options.
- """
- # First look to see what we can load from cache.
- if self.use_cache:
- logs.info("Trying to load labels.")
- self.fig_labels, self.label_results = self._load_label_cache()
- if self.fig_labels:
- logs.info("Loaded cached label figure.")
- if self.label_results:
- logs.info("Loaded cached label results.")
- # If we can prepare the results afresh...
- if not self.load_only:
- # If we didn't load them already, compute label statistics.
- if not self.label_results:
- logs.info("Preparing labels.")
- self.label_results = self._prepare_labels()
- # If we didn't load it already, create figure.
- if not self.fig_labels:
- logs.info("Creating label figure.")
- self.fig_labels = \
- make_label_fig(self.label_results)
- # Finish
- if self.save:
- self._write_label_cache()
-
- def _load_label_cache(self):
- fig_labels = {}
- label_results = {}
- # Measurements exist. Load them.
- if exists(self.labels_json_fid):
- # Loads the label list, names, and results
- label_results = ds_utils.read_json(self.labels_json_fid)
- # Image exists. Load it.
- if exists(self.labels_fig_json_fid):
- fig_labels = ds_utils.read_plotly(self.labels_fig_json_fid)
- return fig_labels, label_results
-
- def _prepare_labels(self):
- """Loads a Labels object and computes label statistics"""
- # Label object for the dataset
- label_obj = Labels(dataset=self.dset,
- dataset_name=self.dset_name,
- config_name=self.dset_config)
- # TODO: Handle the case where there are multiple label columns.
- # The logic throughout the code assumes only one.
- if type(self.label_field) == tuple:
- label_field = self.label_field[0]
- elif type(self.label_field) == str:
- label_field = self.label_field
- else:
- logs.warning("Unexpected format %s for label column name(s). "
- "Not computing label statistics." %
- type(self.label_field))
- return {}
- label_results = label_obj.prepare_labels(label_field, self.label_names)
- return label_results
-
- def _write_label_cache(self):
- ds_utils.make_path(pjoin(self.cache_dir, self.label_dir))
- if self.label_results:
- ds_utils.write_json(self.label_results, self.labels_json_fid)
- if self.fig_labels:
- ds_utils.write_plotly(self.fig_labels, self.labels_fig_json_fid)
- self.fig_labels.write_html(self.labels_fig_html_fid)
-
- def get_label_filenames(self):
- label_fid_dict = {"statistics": self.labels_json_fid,
- "figure json": self.labels_fig_json_fid,
- "figure html": self.labels_fig_html_fid}
- return label_fid_dict
-
-
-class Labels:
- """Generic class for label processing.
- Uses the Dataset to extract the label column and compute label measurements.
- """
-
- def __init__(self, dataset, dataset_name=None, config_name=None):
- # Input HuggingFace Dataset.
- self.dset = dataset
- # These are used to extract label names, when the label names
- # are stored in the Dataset object but not in the "label" column
- # we are working with, which may instead just be ints corresponding to
- # the names
- self.ds_name = dataset_name
- self.config_name = config_name
- # For measurement data and additional metadata.
- self.label_results_dict = {}
-
- def prepare_labels(self, label_field, label_names=[]):
- """ Uses the evaluate library to return the label distribution. """
- logs.info("Inside main label calculation function.")
- logs.debug("Looking for label field called '%s'" % label_field)
- # The input Dataset object
- # When the label field is not found, an error will be thrown.
- if label_field in self.dset.features:
- label_list = self.dset[label_field]
- else:
- logs.warning("No label column found -- nothing to do. Returning.")
- logs.debug(self.dset.features)
- return {}
- # Get the evaluate library's measurement for label distro.
- label_distribution = evaluate.load(EVAL_LABEL_MEASURE)
- # Measure the label distro.
- label_measurement = label_distribution.compute(data=label_list)
- # TODO: Incorporate this summation into what the evaluate library returns.
- label_sum_dict = Counter(label_list)
- label_sums = [label_sum_dict[key] for key in sorted(label_sum_dict)]
- label_measurement["sums"] = label_sums
- if not label_names:
- # Have to extract the label names from the Dataset object when the
- # actual dataset columns are just ints representing the label names.
- label_names = extract_label_names(label_field, self.ds_name,
- self.config_name)
- label_results = make_label_results_dict(label_measurement, label_names)
- return label_results
diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/process_data/dedup_data.py b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/process_data/dedup_data.py
deleted file mode 100644
index 58d9ed1cd17b3ba70772a6d9adab709785495fd9..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/process_data/dedup_data.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import argparse
-from collections import namedtuple
-import os
-
-DATADIR = "/path/to/train_data"
-DEDUP_FROM_DIR = "/path/to/eval/data"
-OUTPUT_DIR = "/path/to/output/data"
-
-
-def main(args):
- languages = set()
- for language_directory in os.listdir(DATADIR):
- if "_" in language_directory:
- src, tgt = language_directory.split("_")
- languages.add(LanguagePair(src=src, tgt=tgt))
-
- data = existing_data()
- train_languages = sorted(languages)
- for language_pair in train_languages[args.start_index:args.start_index + args.size]:
- print(language_pair)
- dedup(language_pair, data)
-
-
-LanguagePair = namedtuple("LanguagePair", ["src", "tgt"])
-
-
-def existing_data():
- data = set()
- for file in os.listdir(DEDUP_FROM_DIR):
- with open(os.path.join(DEDUP_FROM_DIR, file)) as f:
- data |= set(f.readlines())
- return data
-
-def dedup(language_pair, data, verbose=True, output=True):
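-    # Drop every training sentence pair whose source or target line also appears in the
-    # evaluation data, then write the filtered parallel files to OUTPUT_DIR; pairs whose
-    # outputs already exist are skipped.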
- train_filenames = LanguagePair(
- src=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.src}",
- tgt=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.tgt}",
- )
-
- output_filenames = LanguagePair(
- src=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.src}",
- tgt=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.tgt}"
- )
-
- # If output exists, skip this pair. It has already been done.
- if (os.path.exists(output_filenames.src) and
- os.path.exists(output_filenames.tgt)):
- if verbose:
- print(f"{language_pair.src}-{language_pair.tgt} already done.")
- return
-
- if verbose:
- print(f"{language_pair.src}-{language_pair.tgt} ready, will check dups.")
-
- # If there is no output, no need to actually do the loop.
- if not output:
- return
-
- if os.path.exists(train_filenames.src) and os.path.exists(train_filenames.tgt):
- with open(train_filenames.src) as f:
- train_source = f.readlines()
-
- with open(train_filenames.tgt) as f:
- train_target = f.readlines()
-
- # do dedup
- new_train_source = []
- new_train_target = []
- for i, train_line in enumerate(train_source):
- if train_line not in data and train_target[i] not in data:
- new_train_source.append(train_line)
- new_train_target.append(train_target[i])
-
- assert len(train_source) == len(train_target)
- assert len(new_train_source) == len(new_train_target)
- assert len(new_train_source) <= len(train_source)
-
- with open(output_filenames.src, "w") as o:
- for line in new_train_source:
- o.write(line)
-
- with open(output_filenames.tgt, "w") as o:
- for line in new_train_target:
- o.write(line)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("-s", "--start-index", required=True, type=int)
- parser.add_argument("-n", "--size", required=True, type=int)
- main(parser.parse_args())
diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/vads.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/vads.py
deleted file mode 100644
index 2398da97d8c44b8f3f270b22d5508a003482b4d6..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/vads.py
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import sys
-
-from copy import deepcopy
-from scipy.signal import lfilter
-
-import numpy as np
-from tqdm import tqdm
-import soundfile as sf
-import os.path as osp
-
-
-def get_parser():
- parser = argparse.ArgumentParser(description="compute vad segments")
- parser.add_argument(
- "--rvad-home",
- "-r",
- help="path to rvad home (see https://github.com/zhenghuatan/rVADfast)",
- required=True,
- )
-
- return parser
-
-
-def rvad(speechproc, path):
- winlen, ovrlen, pre_coef, nfilter, nftt = 0.025, 0.01, 0.97, 20, 512
- ftThres = 0.5
- vadThres = 0.4
- opts = 1
-
- data, fs = sf.read(path)
- assert fs == 16_000, "sample rate must be 16khz"
- ft, flen, fsh10, nfr10 = speechproc.sflux(data, fs, winlen, ovrlen, nftt)
-
- # --spectral flatness --
- pv01 = np.zeros(ft.shape[0])
- pv01[np.less_equal(ft, ftThres)] = 1
- pitch = deepcopy(ft)
-
- pvblk = speechproc.pitchblockdetect(pv01, pitch, nfr10, opts)
-
- # --filtering--
- ENERGYFLOOR = np.exp(-50)
- b = np.array([0.9770, -0.9770])
- a = np.array([1.0000, -0.9540])
- fdata = lfilter(b, a, data, axis=0)
-
- # --pass 1--
- noise_samp, noise_seg, n_noise_samp = speechproc.snre_highenergy(
- fdata, nfr10, flen, fsh10, ENERGYFLOOR, pv01, pvblk
- )
-
- # sets noisy segments to zero
- for j in range(n_noise_samp):
- fdata[range(int(noise_samp[j, 0]), int(noise_samp[j, 1]) + 1)] = 0
-
- vad_seg = speechproc.snre_vad(
- fdata, nfr10, flen, fsh10, ENERGYFLOOR, pv01, pvblk, vadThres
- )
- return vad_seg, data
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- sys.path.append(args.rvad_home)
- import speechproc
-
- stride = 160
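-    # 160 samples per VAD frame at 16 kHz (10 ms hop); frame indices returned by rVAD
-    # are converted back to sample offsets below.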
- lines = sys.stdin.readlines()
- root = lines[0].rstrip()
- for fpath in tqdm(lines[1:]):
- path = osp.join(root, fpath.split()[0])
- vads, wav = rvad(speechproc, path)
-
- start = None
- vad_segs = []
- for i, v in enumerate(vads):
- if start is None and v == 1:
- start = i * stride
- elif start is not None and v == 0:
- vad_segs.append((start, i * stride))
- start = None
- if start is not None:
- vad_segs.append((start, len(wav)))
-
- print(" ".join(f"{v[0]}:{v[1]}" for v in vad_segs))
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Jamel887/Rvc-tio887/vc_infer_pipeline.py b/spaces/Jamel887/Rvc-tio887/vc_infer_pipeline.py
deleted file mode 100644
index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000
--- a/spaces/Jamel887/Rvc-tio887/vc_infer_pipeline.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import numpy as np, parselmouth, torch, pdb, sys, os
-from time import time as ttime
-import torch.nn.functional as F
-import scipy.signal as signal
-import pyworld, os, traceback, faiss, librosa, torchcrepe
-from scipy import signal
-from functools import lru_cache
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-
-def change_rms(data1, sr1, data2, sr2, rate):  # data1: input audio, data2: converted output audio, rate: weight given to data2's own loudness
- # print(data1.max(),data2.max())
- rms1 = librosa.feature.rms(
- y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
-    )  # one RMS point every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
- rms1 = torch.from_numpy(rms1)
- rms1 = F.interpolate(
- rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.from_numpy(rms2)
- rms2 = F.interpolate(
- rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
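-    # Scale the output by (rms1 / rms2)^(1 - rate): rate=1 keeps the converted audio's
-    # own loudness envelope, rate=0 fully imposes the input's envelope.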
- data2 *= (
- torch.pow(rms1, torch.tensor(1 - rate))
- * torch.pow(rms2, torch.tensor(rate - 1))
- ).numpy()
- return data2
-
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
-        self.sr = 16000  # HuBERT input sample rate
-        self.window = 160  # samples per frame (10 ms at 16 kHz)
-        self.t_pad = self.sr * self.x_pad  # padding added before and after each segment
-        self.t_pad_tgt = tgt_sr * self.x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * self.x_query  # search range around each candidate cut point
-        self.t_center = self.sr * self.x_center  # spacing between candidate cut points
-        self.t_max = self.sr * self.x_max  # length threshold below which no cut-point search is done
- self.device = config.device
-
- def get_f0(
- self,
- input_audio_path,
- x,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- inp_f0=None,
- ):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "crepe":
- model = "full"
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- elif f0_method == "rmvpe":
- if hasattr(self, "model_rmvpe") == False:
- from rmvpe import RMVPE
-
- print("loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "rmvpe.pt", is_half=self.is_half, device=self.device
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
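-        # Map f0 to the mel scale and quantize it into 255 bins (values <= 0 fall into
-        # bin 1, i.e. unvoiced) as the coarse pitch input; f0bak keeps the raw Hz values.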
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(np.int64)
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = feats.clone()
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
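-            # Replace each HuBERT feature with an inverse-squared-distance weighted average
-            # of its 8 nearest neighbours from the faiss index, then blend the retrieved
-            # features with the original ones according to index_rate.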
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
- 0, 2, 1
- )
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
-
- if protect < 0.5 and pitch != None and pitchf != None:
- pitchff = pitchf.clone()
- pitchff[pitchf > 0] = 1
- pitchff[pitchf < 1] = protect
- pitchff = pitchff.unsqueeze(-1)
- feats = feats * pitchff + feats0 * (1 - pitchff)
- feats = feats.to(feats0.dtype)
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
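-        # For inputs longer than t_max, pick a low-amplitude sample near every t_center
-        # interval as a cut point, so the audio can be converted in chunks at relatively
-        # quiet positions and re-joined afterwards.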
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(
- input_audio_path,
- audio_pad,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- inp_f0,
- )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
diff --git a/spaces/Jamkonams/AutoGPT/ui/utils.py b/spaces/Jamkonams/AutoGPT/ui/utils.py
deleted file mode 100644
index 71703e2009afac0582300f5d99a91ddec4119e04..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/ui/utils.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import os
-import re
-
-def format_directory(directory):
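-    # Render a directory as a text tree (similar to the `tree` command), recursing into
-    # sub-folders.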
- output = []
- def helper(directory, level, output):
- files = os.listdir(directory)
- for i, item in enumerate(files):
- is_folder = os.path.isdir(os.path.join(directory, item))
- joiner = "├── " if i < len(files) - 1 else "└── "
- item_html = item + "/" if is_folder else f"{item}"
- output.append("│ " * level + joiner + item_html)
- if is_folder:
- helper(os.path.join(directory, item), level + 1, output)
- output.append(os.path.basename(directory) + "/")
- helper(directory, 1, output)
- return "\n".join(output)
-
-DOWNLOAD_OUTPUTS_JS = """
-() => {
- const a = document.createElement('a');
- a.href = 'file=outputs.zip';
- a.download = 'outputs.zip';
- document.body.appendChild(a);
- a.click();
- document.body.removeChild(a);
-}"""
-
-def remove_color(text):
- ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
- return ansi_escape.sub('', text)
\ No newline at end of file
diff --git a/spaces/JavaFXpert/GPT-3.5-Table-inator/README.md b/spaces/JavaFXpert/GPT-3.5-Table-inator/README.md
deleted file mode 100644
index bb945b202c61d2c41a789964e9f9e71bd5c390e4..0000000000000000000000000000000000000000
--- a/spaces/JavaFXpert/GPT-3.5-Table-inator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: GPT 3.5 Table Inator
-emoji: 💩
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JeffJing/ZookChatBot/steamship/base/__init__.py b/spaces/JeffJing/ZookChatBot/steamship/base/__init__.py
deleted file mode 100644
index e0b81945821df65923697b57a70ae6642eeab8d8..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/base/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from .configuration import Configuration
-from .environments import RuntimeEnvironments, check_environment
-from .error import SteamshipError
-from .mime_types import MimeTypes
-from .tasks import Task, TaskState
-
-__all__ = [
- "Configuration",
- "SteamshipError",
- "Task",
- "TaskState",
- "MimeTypes",
- "RuntimeEnvironments",
- "check_environment",
-]
diff --git a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/app.py b/spaces/JohnCalimoso/animalbreedidentificationversion1.5/app.py
deleted file mode 100644
index 869db191f2f4c0de6a358de7ee47eabe97c6bc25..0000000000000000000000000000000000000000
--- a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/app.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import streamlit as st
-import cv2
-from PIL import Image
-import numpy as np
-import time
-
-
-
-def main():
- # basic page configuration
- st.set_page_config(
- page_title="ABI",
- page_icon="🐾"
- )
-
- st.title("Animal Breed Identification")
-
- animal_chs = st.sidebar.selectbox("Select Animal", ("Guinea Pig","Hamster","Spider","Rabbit","Snake")) # This is the side bar selection
-
- aimodel_chs = st.sidebar.selectbox("Select Identifier", ("Image Wizard","Smart Recommendation","Easy Decision Maker","Combine Insight"))
- # a function for uploading files
- def upload_file():
- uploaded_file_toplabel = f'What Breed of {animal_chs}?'
- uploaded_file = st.file_uploader( uploaded_file_toplabel, type=["jpg", "jpeg","png"])
- return uploaded_file
-
- # a function for using the camera
- def using_camera():
- uploaded_file_toplabel = f'What Breed of {animal_chs}?'
- captured_data = st.camera_input(uploaded_file_toplabel, key="camera_capture", disabled=False)
- return captured_data
-
- warning = st.warning('Please allow this page to access the camera', icon="⚠️")
-
- option = st.radio("Choose an option", ("Upload", "Camera"))
- # conditional statement for choosing to upload or using the camera
-
- if option == "Upload":
- captured_img = upload_file()
- else:
- captured_img = using_camera()
-
- c1, c2= st.columns(2) # this gives us a two column, one for input and the other one is for the result
- if captured_img is not None:
- im= Image.open(captured_img)
- img= np.asarray(im)
- image= cv2.resize(img,(256, 256))
- img= np.expand_dims(img, 0)
- c1.header('Input Image')
- c1.image(im)
-
- if captured_img is not None:
- c2.header('Identified As:')
- identified_as = ''
- prob_perc = 0
- # model
- if animal_chs == "Guinea Pig":
- if aimodel_chs == "Image Wizard":
- from Control.Guineapig.con_guineapig_resnet import gpResNet
- prediction = gpResNet(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
- elif aimodel_chs == "Smart Recommendation":
- from Control.Guineapig.con_guineapig_SVM import gpSVM
- prediction = gpSVM(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
- elif aimodel_chs == "Easy Decision Maker":
- from Control.Guineapig.con_guineapig_logreg import gpLogReg
- prediction = gpLogReg(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- else:
- from Control.Guineapig.con_guineapig_ensemble import gpEnsemble
- prediction = gpEnsemble(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
- elif animal_chs == "Hamster":
- if aimodel_chs == "Image Wizard":
- from Control.Hamster.con_hamster_resnet import hamsterResnet
- prediction = hamsterResnet(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Smart Recommendation":
- from Control.Hamster.con_hamster_SVM import hamsterSVM
- prediction = hamsterSVM(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Easy Decision Maker":
- from Control.Hamster.con_hamster_logreg import hamsterLogReg
- prediction = hamsterLogReg(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- else:
- from Control.Hamster.con_hamster_ensemble import hamsterEnsemble
- prediction = hamsterEnsemble(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
- elif animal_chs == "Spider":
- if aimodel_chs == "Image Wizard":
- from Control.Spider.con_spider_resnet import spiderResnet
- prediction = spiderResnet(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Smart Recommendation":
- from Control.Spider.con_spider_SVM import spiderSVM
- prediction = spiderSVM(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Easy Decision Maker":
- from Control.Spider.con_spider_logreg import spiderLogReg
- prediction = spiderLogReg(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- else:
- from Control.Spider.con_spider_ensemble import spiderEnsemble
- prediction = spiderEnsemble(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
- elif animal_chs == "Rabbit":
- if aimodel_chs == "Image Wizard":
- from Control.Rabbit.con_rabbit_resnet import rabbitResnet
- prediction = rabbitResnet(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Smart Recommendation":
- from Control.Rabbit.con_rabbit_SVM import rabbitSVM
- prediction = rabbitSVM(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Easy Decision Maker":
- from Control.Rabbit.con_rabbit_logreg import rabbitsLogReg
- prediction = rabbitsLogReg(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- else:
- from Control.Rabbit.con_rabbit_ensemble import rabbitEnsemble
- prediction = rabbitEnsemble(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
- elif animal_chs == "Snake":
- if aimodel_chs == "Image Wizard":
- from Control.Snake.con_snake_resnet import snakeResnet
- prediction = snakeResnet(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Smart Recommendation":
- from Control.Snake.con_snake_SVM import snakeSVM
- prediction = snakeSVM(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- elif aimodel_chs == "Easy Decision Maker":
- from Control.Snake.con_snake_logreg import snakeLogReg
- prediction = snakeLogReg(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
- else:
- from Control.Snake.con_snake_ensemble import snakeEnsemble
- prediction = snakeEnsemble(captured_img)
- result = prediction.predict_image()
- identified_as = result[0]
- prob_perc = result[1]
-
-
- c2.subheader(identified_as)
- c2.subheader("{:.2%}".format(prob_perc))
- # loading function
- # with st.spinner('Wait for it...'):
- # time.sleep(10)
- st.success('Done!')
-
-
-
- # Footer
- hide_footer = """
-
-
- """
-
- # this will implement the markdown code in the website
- # st.markdown(hide_footer, unsafe_allow_html= True)
-
-if __name__== '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/korean.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/korean.py
deleted file mode 100644
index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/korean.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import re
-from jamo import h2j, j2hcj
-import ko_pron
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (ipa, lazy ipa) pairs:
-_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('t͡ɕ','ʧ'),
- ('d͡ʑ','ʥ'),
- ('ɲ','n^'),
- ('ɕ','ʃ'),
- ('ʷ','w'),
- ('ɭ','l`'),
- ('ʎ','ɾ'),
- ('ɣ','ŋ'),
- ('ɰ','ɯ'),
- ('ʝ','j'),
- ('ʌ','ə'),
- ('ɡ','g'),
- ('\u031a','#'),
- ('\u0348','='),
- ('\u031e',''),
- ('\u0320',''),
- ('\u0339','')
-]]
-
-
-def latin_to_hangul(text):
- for regex, replacement in _latin_to_hangul:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def divide_hangul(text):
- text = j2hcj(h2j(text))
- for regex, replacement in _hangul_divided:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def hangul_number(num, sino=True):
- '''Reference https://github.com/Kyubyong/g2pK'''
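-    # Spell out a numeric string in Hangul: Sino-Korean readings when sino=True,
-    # native Korean readings (used before counters, e.g. '스무' for 20) otherwise.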
- num = re.sub(',', '', num)
-
- if num == '0':
- return '영'
- if not sino and num == '20':
- return '스무'
-
- digits = '123456789'
- names = '일이삼사오육칠팔구'
- digit2name = {d: n for d, n in zip(digits, names)}
-
- modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉'
- decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔'
- digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())}
- digit2dec = {d: dec for d, dec in zip(digits, decimals.split())}
-
- spelledout = []
- for i, digit in enumerate(num):
- i = len(num) - i - 1
- if sino:
- if i == 0:
- name = digit2name.get(digit, '')
- elif i == 1:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- else:
- if i == 0:
- name = digit2mod.get(digit, '')
- elif i == 1:
- name = digit2dec.get(digit, '')
- if digit == '0':
- if i % 4 == 0:
- last_three = spelledout[-min(3, len(spelledout)):]
- if ''.join(last_three) == '':
- spelledout.append('')
- continue
- else:
- spelledout.append('')
- continue
- if i == 2:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 3:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 4:
- name = digit2name.get(digit, '') + '만'
- name = name.replace('일만', '만')
- elif i == 5:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- elif i == 6:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 7:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 8:
- name = digit2name.get(digit, '') + '억'
- elif i == 9:
- name = digit2name.get(digit, '') + '십'
- elif i == 10:
- name = digit2name.get(digit, '') + '백'
- elif i == 11:
- name = digit2name.get(digit, '') + '천'
- elif i == 12:
- name = digit2name.get(digit, '') + '조'
- elif i == 13:
- name = digit2name.get(digit, '') + '십'
- elif i == 14:
- name = digit2name.get(digit, '') + '백'
- elif i == 15:
- name = digit2name.get(digit, '') + '천'
- spelledout.append(name)
- return ''.join(elem for elem in spelledout)
-
-
-def number_to_hangul(text):
- '''Reference https://github.com/Kyubyong/g2pK'''
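-    # Numbers followed by a native-Korean classifier are read with native numerals,
-    # all other numbers with Sino-Korean readings; any leftover digits are then
-    # spelled out digit by digit.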
- tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text))
- for token in tokens:
- num, classifier = token
- if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers:
- spelledout = hangul_number(num, sino=False)
- else:
- spelledout = hangul_number(num, sino=True)
- text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}')
- # digit by digit for remaining digits
- digits = '0123456789'
- names = '영일이삼사오육칠팔구'
- for d, n in zip(digits, names):
- text = text.replace(d, n)
- return text
-
-
-def korean_to_lazy_ipa(text):
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
-    text = re.sub('[\uac00-\ud7af]+', lambda x: ko_pron.romanise(x.group(0), 'ipa').split('] ~ [')[0], text)
- for regex, replacement in _ipa_to_lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def korean_to_ipa(text):
- text = korean_to_lazy_ipa(text)
- return text.replace('ʧ','tʃ').replace('ʥ','dʑ')
diff --git a/spaces/KPCGD/bingo/src/lib/bots/bing/types.ts b/spaces/KPCGD/bingo/src/lib/bots/bing/types.ts
deleted file mode 100644
index 02cd5e8b01e3529642d28dc1539bf958f4ac420b..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/lib/bots/bing/types.ts
+++ /dev/null
@@ -1,259 +0,0 @@
-export type Author = 'user' | 'system' | 'bot'
-
-export type BotId = 'bing'
-
-export enum BingConversationStyle {
- Creative = 'Creative',
- Balanced = 'Balanced',
- Precise = 'Precise'
-}
-
-export enum ErrorCode {
- CONVERSATION_LIMIT = 'CONVERSATION_LIMIT',
- BING_UNAUTHORIZED = 'BING_UNAUTHORIZED',
- BING_FORBIDDEN = 'BING_FORBIDDEN',
- BING_CAPTCHA = 'BING_CAPTCHA',
- THROTTLE_LIMIT = 'THROTTLE_LIMIT',
- NOTFOUND_ERROR = 'NOT_FOUND_ERROR',
- UNKOWN_ERROR = 'UNKOWN_ERROR',
- NETWORK_ERROR = 'NETWORK_ERROR',
-}
-
-export class ChatError extends Error {
- code: ErrorCode
- constructor(message: string, code: ErrorCode) {
- super(message)
- this.code = code
- }
-}
-
-export type ChatMessageModel = {
- id: string
- author: Author
- text: string
- error?: ChatError
- throttling?: Throttling
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
-}
-
-export interface ConversationModel {
- messages: ChatMessageModel[]
-}
-
-export type Event =
- | {
- type: 'UPDATE_ANSWER'
- data: {
- text: string
- spokenText?: string
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
- throttling?: Throttling
- }
- }
- | {
- type: 'DONE'
- }
- | {
- type: 'ERROR'
- error: ChatError
- }
-
-export interface SendMessageParams<T> {
- prompt: string
- imageUrl?: string
- options: T
- onEvent: (event: Event) => void
- signal?: AbortSignal
-}
-
-export interface ConversationResponse {
- conversationId: string
- clientId: string
- conversationSignature: string
- result: {
- value: string
- message?: string
- }
-}
-
-export interface Telemetry {
- metrics?: null
- startTime: string
-}
-
-export interface ChatUpdateArgument {
- messages?: ChatResponseMessage[]
- throttling?: Throttling
- requestId: string
- result: null
-}
-
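-// Raw update frames from the chat websocket; the numeric `type` field corresponds to
-// InvocationEventType below (1 = invocation with streamed arguments, 2 = final stream item,
-// 3 = completion, 6/7 = ping/close).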
-export type ChatUpdateCompleteResponse = {
- type: 2
- invocationId: string
- item: ChatResponseItem
-} | {
- type: 1
- target: string
- arguments: ChatUpdateArgument[]
-} | {
- type: 3
- invocationId: string
-} | {
- type: 6 | 7
-}
-
-export interface ChatRequestResult {
- value: string
- serviceVersion: string
- error?: string
-}
-
-export interface ChatResponseItem {
- messages: ChatResponseMessage[]
- firstNewMessageIndex: number
- suggestedResponses: null
- conversationId: string
- requestId: string
- conversationExpiryTime: string
- telemetry: Telemetry
- result: ChatRequestResult
- throttling: Throttling
-}
-export enum InvocationEventType {
- Invocation = 1,
- StreamItem = 2,
- Completion = 3,
- StreamInvocation = 4,
- CancelInvocation = 5,
- Ping = 6,
- Close = 7,
-}
-
-// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts
-
-export interface ConversationInfo {
- conversationId: string
- clientId: string
- conversationSignature: string
- invocationId: number
- conversationStyle: BingConversationStyle
- prompt: string
- imageUrl?: string
-}
-
-export interface BingChatResponse {
- conversationSignature: string
- conversationId: string
- clientId: string
- invocationId: number
- conversationExpiryTime: Date
- response: string
- details: ChatResponseMessage
-}
-
-export interface Throttling {
- maxNumLongDocSummaryUserMessagesInConversation: number
- maxNumUserMessagesInConversation: number
- numLongDocSummaryUserMessagesInConversation: number
- numUserMessagesInConversation: number
-}
-
-export interface ChatResponseMessage {
- text: string
- spokenText?: string
- author: string
- createdAt: Date
- timestamp: Date
- messageId: string
- requestId: string
- offense: string
- adaptiveCards: AdaptiveCard[]
- sourceAttributions: SourceAttribution[]
- feedback: Feedback
- contentOrigin: string
- messageType?: string
- contentType?: string
- privacy: null
- suggestedResponses: SuggestedResponse[]
-}
-
-export interface AdaptiveCard {
- type: string
- version: string
- body: Body[]
-}
-
-export interface Body {
- type: string
- text: string
- wrap: boolean
- size?: string
-}
-
-export interface Feedback {
- tag: null
- updatedOn: null
- type: string
-}
-
-export interface SourceAttribution {
- providerDisplayName: string
- seeMoreUrl: string
- searchQuery: string
-}
-
-export interface SuggestedResponse {
- text: string
- author?: Author
- createdAt?: Date
- timestamp?: Date
- messageId?: string
- messageType?: string
- offense?: string
- feedback?: Feedback
- contentOrigin?: string
- privacy?: null
-}
-
-export interface KBlobRequest {
- knowledgeRequest: KnowledgeRequestContext
- imageBase64?: string
-}
-
-export interface KBlobResponse {
- blobId: string
- processedBlobId?: string
-}
-
-export interface KnowledgeRequestContext {
- imageInfo: ImageInfo;
- knowledgeRequest: KnowledgeRequest;
-}
-
-export interface ImageInfo {
- url?: string;
-}
-
-export interface KnowledgeRequest {
- invokedSkills: string[];
- subscriptionId: string;
- invokedSkillsRequestData: InvokedSkillsRequestData;
- convoData: ConvoData;
-}
-
-export interface ConvoData {
- convoid: string;
- convotone: BingConversationStyle;
-}
-
-export interface InvokedSkillsRequestData {
- enableFaceBlur: boolean;
-}
-
-export interface FileItem {
- url: string;
- status?: 'loading' | 'error' | 'loaded'
-}
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py
deleted file mode 100644
index 823b44fb64898e8dcbb12180ba45d1718f9b03f7..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_537238KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 64)
- self.stg1_high_band_net = BaseASPPNet(2, 64)
-
- self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(32, 64)
-
- self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(64, 128)
-
- self.out = nn.Conv2d(128, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/Kevin676/AutoGPT/tests/unit/test_browse_scrape_text.py b/spaces/Kevin676/AutoGPT/tests/unit/test_browse_scrape_text.py
deleted file mode 100644
index fea5ebfc05d466c7cb5711b5ac10e2ea102ddc45..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/tests/unit/test_browse_scrape_text.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Generated by CodiumAI
-
-import requests
-
-from autogpt.commands.web_requests import scrape_text
-
-"""
-Code Analysis
-
-Objective:
-The objective of the "scrape_text" function is to scrape the text content from
-a given URL and return it as a string, after removing any unwanted HTML tags and scripts.
-
-Inputs:
-- url: a string representing the URL of the webpage to be scraped.
-
-Flow:
-1. Send a GET request to the given URL using the requests library and the user agent header from the config file.
-2. Check if the response contains an HTTP error. If it does, return an error message.
-3. Use BeautifulSoup to parse the HTML content of the response and extract all script and style tags.
-4. Get the text content of the remaining HTML using the get_text() method of BeautifulSoup.
-5. Split the text into lines and then into chunks, removing any extra whitespace.
-6. Join the chunks into a single string with newline characters between them.
-7. Return the cleaned text.
-
-Outputs:
-- A string representing the cleaned text content of the webpage.
-
-Additional aspects:
-- The function uses the requests library and BeautifulSoup to handle the HTTP request and HTML parsing, respectively.
-- The function removes script and style tags from the HTML to avoid including unwanted content in the text output.
-- The function uses a generator expression to split the text into lines and chunks, which can improve performance for large amounts of text.
-"""
-
-
-class TestScrapeText:
- # Tests that scrape_text() returns the expected text when given a valid URL.
- def test_scrape_text_with_valid_url(self, mocker):
- # Mock the requests.get() method to return a response with expected text
- expected_text = "This is some sample text"
- mock_response = mocker.Mock()
- mock_response.status_code = 200
- mock_response.text = f"
{expected_text}
"
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a valid URL and assert that it returns the expected text
- url = "http://www.example.com"
- assert scrape_text(url) == expected_text
-
- # Tests that the function returns an error message when an invalid or unreachable url is provided.
- def test_invalid_url(self, mocker):
- # Mock the requests.get() method to raise an exception
- mocker.patch(
- "requests.Session.get", side_effect=requests.exceptions.RequestException
- )
-
- # Call the function with an invalid URL and assert that it returns an error message
- url = "http://www.invalidurl.com"
- error_message = scrape_text(url)
- assert "Error:" in error_message
-
- # Tests that the function returns an empty string when the html page contains no text to be scraped.
- def test_no_text(self, mocker):
- # Mock the requests.get() method to return a response with no text
- mock_response = mocker.Mock()
- mock_response.status_code = 200
- mock_response.text = ""
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a valid URL and assert that it returns an empty string
- url = "http://www.example.com"
- assert scrape_text(url) == ""
-
- # Tests that the function returns an error message when the response status code is an http error (>=400).
- def test_http_error(self, mocker):
- # Mock the requests.get() method to return a response with a 404 status code
- mocker.patch("requests.Session.get", return_value=mocker.Mock(status_code=404))
-
- # Call the function with a URL
- result = scrape_text("https://www.example.com")
-
- # Check that the function returns an error message
- assert result == "Error: HTTP 404 error"
-
- # Tests that scrape_text() properly handles HTML tags.
- def test_scrape_text_with_html_tags(self, mocker):
- # Create a mock response object with HTML containing tags
- html = "
This is bold text.
"
- mock_response = mocker.Mock()
- mock_response.status_code = 200
- mock_response.text = html
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a URL
- result = scrape_text("https://www.example.com")
-
- # Check that the function properly handles HTML tags
- assert result == "This is bold text."
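
The CodiumAI "Code Analysis" docstring in the deleted test file above describes the intended flow of `scrape_text` step by step. A minimal sketch of that flow, assuming `requests` and `BeautifulSoup` — the function name `scrape_text_sketch`, the user-agent header, and the timeout below are illustrative placeholders, not the actual `autogpt.commands.web_requests` implementation — might look like this:

```python
# Illustrative sketch only; not the real autogpt.commands.web_requests.scrape_text.
import requests
from bs4 import BeautifulSoup


def scrape_text_sketch(url: str) -> str:
    """Fetch a page and return its visible text, with scripts and styles removed."""
    try:
        # Step 1: GET the page (user agent and timeout are assumed values here).
        response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    except requests.exceptions.RequestException as err:
        return f"Error: {err}"

    # Step 2: report HTTP error status codes instead of raising.
    if response.status_code >= 400:
        return f"Error: HTTP {response.status_code} error"

    # Step 3: parse the HTML and drop script/style tags.
    soup = BeautifulSoup(response.text, "html.parser")
    for element in soup(["script", "style"]):
        element.extract()

    # Steps 4-7: extract text, trim whitespace line by line, and rejoin.
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    return "\n".join(chunk for chunk in chunks if chunk)
```

Called as `scrape_text_sketch("http://www.example.com")`, this returns either the page's cleaned text or an `"Error: ..."` string, which is the behaviour the deleted tests above exercise.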
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/jquery.js b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/jquery.js
deleted file mode 100644
index fc6c299b73e792ef288e785c22393a5df9dded4b..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/jquery.js
+++ /dev/null
@@ -1,10881 +0,0 @@
-/*!
- * jQuery JavaScript Library v3.6.0
- * https://jquery.com/
- *
- * Includes Sizzle.js
- * https://sizzlejs.com/
- *
- * Copyright OpenJS Foundation and other contributors
- * Released under the MIT license
- * https://jquery.org/license
- *
- * Date: 2021-03-02T17:08Z
- */
-( function( global, factory ) {
-
- "use strict";
-
- if ( typeof module === "object" && typeof module.exports === "object" ) {
-
- // For CommonJS and CommonJS-like environments where a proper `window`
- // is present, execute the factory and get jQuery.
- // For environments that do not have a `window` with a `document`
- // (such as Node.js), expose a factory as module.exports.
- // This accentuates the need for the creation of a real `window`.
- // e.g. var jQuery = require("jquery")(window);
- // See ticket #14549 for more info.
- module.exports = global.document ?
- factory( global, true ) :
- function( w ) {
- if ( !w.document ) {
- throw new Error( "jQuery requires a window with a document" );
- }
- return factory( w );
- };
- } else {
- factory( global );
- }
-
-// Pass this if window is not defined yet
-} )( typeof window !== "undefined" ? window : this, function( window, noGlobal ) {
-
-// Edge <= 12 - 13+, Firefox <=18 - 45+, IE 10 - 11, Safari 5.1 - 9+, iOS 6 - 9.1
-// throw exceptions when non-strict code (e.g., ASP.NET 4.5) accesses strict mode
-// arguments.callee.caller (trac-13335). But as of jQuery 3.0 (2016), strict mode should be common
-// enough that all such attempts are guarded in a try block.
-"use strict";
-
-var arr = [];
-
-var getProto = Object.getPrototypeOf;
-
-var slice = arr.slice;
-
-var flat = arr.flat ? function( array ) {
- return arr.flat.call( array );
-} : function( array ) {
- return arr.concat.apply( [], array );
-};
-
-
-var push = arr.push;
-
-var indexOf = arr.indexOf;
-
-var class2type = {};
-
-var toString = class2type.toString;
-
-var hasOwn = class2type.hasOwnProperty;
-
-var fnToString = hasOwn.toString;
-
-var ObjectFunctionString = fnToString.call( Object );
-
-var support = {};
-
-var isFunction = function isFunction( obj ) {
-
- // Support: Chrome <=57, Firefox <=52
- // In some browsers, typeof returns "function" for HTML